This article explains how to implement face recognition with Dlib and OpenCV. The material is presented from a practical angle; hopefully you will gain something from reading it.
Face Database Import
Face data import means that at system startup the face database (the celebrity frontal photos mentioned earlier) has to be loaded. Since we need to detect the face region in each static image, we first obtain dlib's face detector via get_frontal_face_detector(). Next, the 68-point face landmark model is loaded into shape_predictor sp, whose purpose is to align each face to a standard pose, and then the DNN model is loaded. For each face photo we extract its feature, put the feature together with the name and related information into a FACE_DESC structure, and finally push each face's structure into the face_desc_vec container. Here I only load the face information of 9 celebrities.
int FACE_RECOGNITION::load_db_faces(void)
{
    int rc = -1;
    long hFile = 0;
    struct _finddata_t fileinfo;
    frontal_face_detector detector = get_frontal_face_detector();
    // We will also use a face landmarking model to align faces to a standard pose:
    // (see face_landmark_detection_ex.cpp for an introduction)
    deserialize("shape_predictor_68_face_landmarks.dat") >> sp;
    // And finally we load the DNN responsible for face recognition
    deserialize("dlib_face_recognition_resnet_model_v1.dat") >> net;
    if ((hFile = _findfirst(".\\faces\\*.jpg", &fileinfo)) != -1) {
        do {
            if ((fileinfo.attrib & _A_ARCH)) {
                if (strcmp(fileinfo.name, ".") != 0 && strcmp(fileinfo.name, "..") != 0) {
                    if (!strcmp(strstr(fileinfo.name, ".") + 1, "jpg")) {
                        cout << "This file is an image file!" << fileinfo.name << endl;
                        matrix<rgb_pixel> img;
                        char path[260];
                        sprintf_s(path, ".\\faces\\%s", fileinfo.name);
                        load_image(img, path);
                        image_window win(img);
                        for (auto face : detector(img)) {
                            auto shape = sp(img, face);
                            matrix<rgb_pixel> face_chip;
                            extract_image_chip(img, get_face_chip_details(shape, 150, 0.25), face_chip);
                            // Record all of this face's information
                            FACE_DESC sigle_face;
                            sigle_face.face_chip = face_chip;
                            sigle_face.name = fileinfo.name;
                            std::vector<matrix<rgb_pixel>> face_chip_vec;
                            std::vector<matrix<float, 0, 1>> face_all;
                            face_chip_vec.push_back(move(face_chip));
                            // Ask the DNN to convert each face image into a 128D vector
                            face_all = net(face_chip_vec);
                            // Get the feature of this person
                            std::vector<matrix<float, 0, 1>>::iterator iter_begin = face_all.begin(),
                                                                       iter_end = face_all.end();
                            if (face_all.size() > 1) break;
                            sigle_face.face_feature = *iter_begin;
                            // Put the person's description into the vector
                            face_desc_vec.push_back(sigle_face);
                            win.add_overlay(face);
                        }
                    } else {
                        cout << "This file is not an image file!" << fileinfo.name << endl;
                    }
                }
            } else {
                //files.push_back(p.assign(path).append("\\").append(fileinfo.name));
            }
        } while (_findnext(hFile, &fileinfo) == 0);
        _findclose(hFile);
    }
    return rc;
}
Face Detection
In a face recognition system, I consider face detection to be the crucial step, because the quality of detection directly affects the final recognition rate; the better the detection stage, the bigger the improvement in the system's recognition rate. Below is the concrete face detection implementation (forgive its crudeness). I tried Dlib's face detector, OpenCV's face detector, and Shiqi Yu's libfacedetection, and found libfacedetection to be the best of the three for face detection: it is fast and its detection results are also very good.
int capture_face(Mat frame, Mat &out)
{
    Mat gray;
    Mat face;
    int rc = -1;
    if (frame.empty() || !frame.data) return -1;
    cvtColor(frame, gray, CV_BGR2GRAY);
    int *pResults = NULL;
    unsigned char *pBuffer = (unsigned char *)malloc(DETECT_BUFFER_SIZE);
    if (!pBuffer) {
        fprintf(stderr, "Can not alloc buffer.\n");
        return -1;
    }
    //pResults = facedetect_frontal_tmp((unsigned char*)(gray.ptr(0)), gray.cols, gray.rows, gray.step,
    //                                  1.2f, 5, 24);
    pResults = facedetect_multiview_reinforce(pBuffer, (unsigned char*)(gray.ptr(0)),
                                              gray.cols, gray.rows, (int)gray.step,
                                              1.2f, 2, 48, 0, 1);
    //printf("%d faces detected.\n", (pResults ? *pResults : 0));
    // print the detection results
    if (pResults != NULL) {
        for (int i = 0; i < (pResults ? *pResults : 0); i++) {
            short *p = ((short*)(pResults + 1)) + 6 * i;
            int x = p[0];
            int y = p[1];
            int w = p[2];
            int h = p[3];
            int neighbors = p[4];
            Rect_<float> face_rect = Rect_<float>(x, y, w, h);
            face = frame(face_rect);
            printf("face_rect=[%d, %d, %d, %d], neighbors=%d\n", x, y, w, h, neighbors);
            Point left(x, y);
            Point right(x + w, y + h);
            cv::rectangle(frame, left, right, Scalar(230, 255, 0), 4);
        }
        //imshow("frame", frame);
        if (face.empty() || !face.data) {
            face_detect_count = 0;
            free(pBuffer);  // avoid leaking the detection buffer
            return -1;
        }
        if (face_detect_count++ > 30) {
            imshow("face", face);
            out = face.clone();
            free(pBuffer);
            return 0;
        }
    } else {
        // face is moving, reset the detect count
        face_detect_count = 0;
    }
    free(pBuffer);
    return rc;
}
Face Recognition
The face detection function capture_face() saves the processed capture temporarily as cap.jpg in the project directory. The get_face_chip_details() function normalizes the detected face to 150*150 pixels and rotates and centers it; extract_image_chip() takes a copy of the image and stores it in face_chip. The resulting face_chip is pushed into the vect_faces container and fed to the deep neural network net, which produces a 128D feature vector for the captured face. Finally, the face whose feature is closest to this vector is looked up in the previously loaded face database, which identifies the person.
This mode of application is what we call a 1:N application, and 1:N really tests a system's computing power. For example, Alipay should by now have user accounts in the hundreds of millions; if you chose to pay by face at a restaurant, the server might still not have found your face after half an hour, which would be rather sad. Of course, in a real application scenario you would probably also enter your name first, which makes things much faster: after all, there may only be a few thousand to tens of thousands of people in the whole country sharing your name, so one search followed by face verification is enough.
The above does not even consider security factors: twins, makeup (the age of internet celebrities!), age, and environmental factors such as lighting and angle can all cause misidentification or failure to identify. Failure to identify is tolerable, but for applications with extremely strict security requirements such as payment, a misidentification is simply a disaster. So face recognition still has big limitations... er, but I digress.
matrix<rgb_pixel> face_cap;
// load the capture saved in the project directory
load_image(face_cap, ".\\cap.jpg");
// Display the raw image on the screen
image_window win1(face_cap);
frontal_face_detector detector = get_frontal_face_detector();
std::vector<matrix<rgb_pixel>> vect_faces;
for (auto face : detector(face_cap)) {
    auto shape = face_recognize.sp(face_cap, face);
    matrix<rgb_pixel> face_chip;
    extract_image_chip(face_cap, get_face_chip_details(shape, 150, 0.25), face_chip);
    vect_faces.push_back(move(face_chip));
    win1.add_overlay(face);
}
if (vect_faces.size() != 1) {
    cout << "Capture face error! face number " << vect_faces.size() << endl;
    cap.release();
    goto CAPTURE;
}
// Use the DNN to get the captured face's feature as a 128D vector
std::vector<matrix<float, 0, 1>> face_cap_desc = face_recognize.net(vect_faces);
// Browse the face features from the database and find the matching one
std::pair<double, std::string> candidate_face;
std::vector<double> len_vec;
std::vector<std::pair<double, std::string>> candi_face_vec;
candi_face_vec.reserve(256);
for (size_t i = 0; i < face_recognize.face_desc_vec.size(); ++i) {
    auto len = length(face_cap_desc[0] - face_recognize.face_desc_vec[i].face_feature);
    if (len < 0.45) {
        len_vec.push_back(len);
        candidate_face.first = len;
        candidate_face.second = face_recognize.face_desc_vec[i].name.c_str();
        candi_face_vec.push_back(candidate_face);
#ifdef _FACE_RECOGNIZE_DEBUG
        char buffer[256] = { 0 };
        sprintf_s(buffer, "Candidate face %s Euclid length %f",
                  face_recognize.face_desc_vec[i].name.c_str(), len);
        MessageBox(CString(buffer), NULL, MB_YESNO);
#endif
    } else {
        cout << "This face from database does not match the captured face, continue!" << endl;
    }
}
// Find the most similar face
if (len_vec.size() != 0) {
    shellSort(len_vec);
    int i(0);
    for (i = 0; i != len_vec.size(); i++) {
        if (len_vec[0] == candi_face_vec[i].first) break;
    }
    char buffer[256] = { 0 };
    sprintf_s(buffer, "The face is %s -- Euclid length %f",
              candi_face_vec[i].second.c_str(), candi_face_vec[i].first);
    if (MessageBox(CString(buffer), NULL, MB_YESNO) == IDNO) {
        face_record();
    }
} else {
    if (MessageBox(CString("No similar face was found"), NULL, MB_YESNO) == IDYES) {
        face_record();
    }
}
face_detect_count = 0;
frame.release();
face.release();
Exception Handling
When a face or an object moves quickly in front of the camera, the system throws an exception.
To handle this problem, we can start by catching the exception with C++'s try and catch:
Mat frame;
Mat face;
VideoCapture cap(0);
if (!cap.isOpened()) {
    AfxMessageBox(_T("Please check your USB camera's interface num."));
}
try {
    while (1) {
        check_close(cap);
        cap >> frame;
        if (!frame.empty()) {
            if (capture_face(frame, face) == 0) {
                // convert to IplImage format and then save with .jpg format
                IplImage face_Img;
                face_Img = IplImage(face);
                // save the captured face to the project directory
                cvSaveImage("./cap.jpg", &face_Img);
                break;
            }
            imshow("view", frame);
        }
        int c = waitKey(10);
        if ((char)c == 'c') {
            break;
        }
    }
} catch (exception& e) {
    cout << "\nexception thrown!" << endl;
    cout << e.what() << endl;
#ifdef _CAPTURE_DEBUG
    MessageBox(CString(e.what()), NULL, MB_YESNO);
#endif
    goto CAPTURE;
}
The above is how to implement face recognition with Dlib and OpenCV. If you happen to have similar questions, the analysis above may help you work through them.