One person, one car, one driver

The story begins with an experience of taking a “black Uber”.
When the car arrived, something seemed off: neither the driver nor the car matched the information shown in the app. But I wanted to get home quickly, so I got in without thinking much of it. Less than a minute into the ride, the driver turned around and said, “I’m going to cancel the Uber order; just pay me directly later.” After I refused repeatedly, he offered to drive me back to where he had picked me up so I could hail another car.
But when I requested another ride on Uber, the same driver was dispatched to pick me up! He said, “Go back and try again if you like — as long as you use Uber, you’ll still get my car!”
At the time I wondered: why was it still his car? Why?
It turned out there was a fleet of more than 30 black-car drivers operating nearby. Each driver controlled a pile of fake Uber accounts; the same person would take orders on hundreds of accounts and then dispatch cars over the radio to pick up passengers. Whichever account your order matched, it would be relayed to whoever was free to pick you up — and even if a different driver came, the routine was the same.
That made me wonder: Uber added a face recognition feature to its app in April this year, so how could these drivers keep using fake accounts? After much prodding, the driver finally revealed that although face recognition sounds impressive, they had software that could crack it easily.
That’s right, face recognition technology has been hacked by a group of black car masters.
The story above comes from a keynote on “The Risks of Face Recognition Technology in Application” given by Gao Xiaochu (Gao Tingyu), a security researcher at Ping An Technology, at the FIT 2017 Internet Security Innovation Conference hosted by Freebuf. After telling it, he showed the software the drivers used to crack Uber’s face recognition: Photospeak, an app that makes the mouth in a photo “speak”.
Gao Xiaochu said that this experience set him thinking about the risks of face recognition in real-world applications. He went on to survey the face recognition software on the market, and the results exceeded his expectations.
Fancy ways to crack face recognition
Through his analysis, he found that most software on the market that uses face recognition follows roughly this process:
Detect face → liveness detection → face comparison (against a previously uploaded selfie or ID photo) → analyze comparison result → return result (pass or fail)
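The flow above can be sketched as a short sequence of checks. Every function name and threshold below is invented for illustration and does not come from any real SDK:

```python
# Illustrative sketch of the typical face-recognition flow described above.
# All function names and the 0.8 threshold are hypothetical.

def recognize(frame, reference_photo):
    if not detect_face(frame):
        return "fail"                                   # step 1: detect a face
    if not liveness_check(frame):
        return "fail"                                   # step 2: liveness detection
    score = compare_faces(frame, reference_photo)       # step 3: face comparison
    return "pass" if score >= 0.8 else "fail"           # steps 4-5: decide, return

# Minimal stand-ins so the sketch runs end to end.
def detect_face(frame):
    return frame.get("has_face", False)

def liveness_check(frame):
    return frame.get("blinked", False)                  # e.g. blink was observed

def compare_faces(frame, reference_photo):
    return 0.9 if frame.get("face_id") == reference_photo.get("face_id") else 0.1
```

Note that every one of these steps is a separate link in the chain, which is exactly where the attacks below find their footholds.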
According to Lei Feng.com, liveness detection requires users to blink, nod, or open their mouths during recognition, to prevent attacks using static images. Applications such as Alipay and Uber have adopted this technology.
Gao Xiaochu said that ordinary app developers do not build face recognition themselves; they obtain the functionality through third-party API interfaces or SDK components. He analyzed every key point in the actual usage flow and ultimately found breakthrough points in multiple links of the chain. With a few simple tricks, face recognition can be rendered useless.
1. Injecting apps to bypass liveness detection
Gao Xiaochu first gave a live demonstration of tampering with a program via application injection, bypassing its so-called liveness detection and passing face recognition with a static photo.
During the injection, he first set a breakpoint in the program, triggered it by repeatedly running the face recognition flow, and then analyzed and modified the values the program had stored, ultimately bypassing the liveness detection.
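The idea behind this attack can be shown with a toy analogue: if the liveness verdict lives in client-side process memory, anyone who can run code inside the process can flip it. The class and method names below are invented; monkey-patching here stands in for the breakpoint-and-modify technique described above.

```python
# Toy analogue of the injection attack: the liveness verdict is just a
# value computed in client memory, so code running inside the process
# can replace it. All names are illustrative.

class FaceAuthClient:
    def liveness_check(self, frame):
        # A real check would analyze blinking/nodding; a static photo fails.
        return frame.get("blinked", False)

    def authenticate(self, frame):
        return "pass" if self.liveness_check(frame) else "fail"

client = FaceAuthClient()
static_photo = {"blinked": False}
assert client.authenticate(static_photo) == "fail"    # normally rejected

# "Injection": overwrite the in-process check, much like patching a stored
# value at a breakpoint. The rest of the flow never sees the difference.
FaceAuthClient.liveness_check = lambda self, frame: True
assert client.authenticate(static_photo) == "pass"    # static photo now passes
```

The server is never in the loop for this decision, which is why a purely client-side liveness check can be defeated this way.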
Besides injecting the application, he also found that by inspecting the app’s data structures and modifying the input parameter dictionary, he could tamper with the picture after liveness detection had completed. In other words, anyone can perform the liveness check with their own face — blinking and looking up themselves — and then substitute a static photo of the victim to pass the face comparison.
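This parameter-swap flaw can be modeled in a few lines: liveness runs on the attacker’s own face, and the request dictionary is edited between the liveness step and the comparison step. All names below are invented for illustration.

```python
# Toy model of the parameter-swap attack: the photo is re-read from a
# mutable request dict after liveness has already passed.

def liveness_check(photo):
    return photo["live"]                        # attacker's real face passes

def compare_to_enrolled(photo, enrolled_id):
    return photo["face_id"] == enrolled_id

def tamper_hook(request):
    pass                                        # benign by default

def verify(request, enrolled_id):
    if not liveness_check(request["photo"]):
        return "fail"
    tamper_hook(request)                        # flaw: dict re-read after liveness
    return "pass" if compare_to_enrolled(request["photo"], enrolled_id) else "fail"

victim_photo = {"live": False, "face_id": 42}   # static photo of the victim
request = {"photo": {"live": True, "face_id": 7}}  # attacker's own face

assert verify(request, enrolled_id=42) == "fail"   # attacker is not the victim

def tamper_hook(request):                       # the attack: swap photos mid-flow
    request["photo"] = victim_photo

request = {"photo": {"live": True, "face_id": 7}}
assert verify(request, enrolled_id=42) == "pass"   # victim's photo now passes
```

The fix is equally simple in concept: bind the comparison to the exact frames that passed liveness, rather than re-reading a client-controlled parameter.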
2. Video Attacks Bypass Liveness Detection
If the previous method requires some technical skill, this one works for complete novices. Just install Photospeak, find a frontal photo of the target from their WeChat Moments, personal profile pages, and the like (which should not be difficult), and feed it into the software. The software makes the photo “speak”, and the so-called liveness detection defeats itself.
The black-car drivers mentioned at the start of this article used this very software to crack the face recognition in Uber’s client. From this, Lei Feng.com conjectures that if video synthesized by this software can defeat liveness detection, then any clear frontal video — an internet celebrity’s livestream, for instance — could also be used to attempt a crack. Once a face is used as a password, it is equivalent to writing the password on your face.
3. 3D modeling bypasses cloud detection
Gao Xiaochu noticed that in addition to blinking and mouth-opening, some face recognition systems ask users to nod, shake their heads, and perform other actions, so he immediately thought of building a 3D face model to crack them.
Using two programs downloaded from the Internet, “FaceGen” and “CrazyTalk”, and referring to the facial features in photos of Aaron Kwok, Gao Xiaochu produced the corresponding 3D models in a short time. Face comparison software rated the similarity between these quickly built models and the original photos at 73.17% and 86.71% respectively — enough to crack typical face recognition systems.
4. Face models bypass cloud detection
Since 3D modeling could successfully bypass face recognition, Gao Xiaochu immediately thought of trying 3D printing — which, unexpectedly, failed. Analyzing the reasons for the failure, he said:
Face recognition usually analyzes multiple facial feature values, and some systems extract several feature points from the eyebrows. If the 3D printing is not precise enough, the printed face will generally lack eyebrow features.
If the 3D-printed model uses only one material, the printed face will be too uniform in color — for example, if the material is yellow, the eyebrows will be yellow too — which greatly reduces the recognition success rate.
If an unsuitable material is used, the facial details of the printed model will be rough and require manual polishing afterwards.
Although Gao Xiaochu did not say it outright, the editor of Lei Feng.com’s Zhaike channel could read between the lines: “It’s not that 3D printing doesn’t work — it’s that the printer I used was too crummy! With a more sophisticated printer, this would be easy to crack too.”
When he brought out the 3D-printed model he had used for testing, one had to admit it was a bit appalling. As Gao Xiaochu said, it lacked facial detail, was a single color, and its proportions did not look right.
5. Improper interface protection and assorted design flaws
Gao Xiaochu found that some apps do not sign the image data when uploading face photos, so the images can be intercepted and tampered with using common tools; others do not add timestamps to their packets, so authentication can be cracked simply by replaying a captured message.
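The two missing protections can be sketched together: an HMAC signature binds the image bytes so they cannot be swapped in transit, and a signed timestamp lets the server reject stale replays. This is a minimal illustration using Python’s standard `hmac` module, not any vendor’s actual protocol; the key and field names are invented.

```python
import hashlib
import hmac
import json
import time

SECRET = b"shared-key"  # illustrative; real systems would use per-device keys

def sign_packet(image_bytes, now=None):
    # Bind the image and a timestamp together under one signature, so
    # neither can be changed (or replayed later) without detection.
    packet = {"image": image_bytes.hex(),
              "ts": int(time.time()) if now is None else now}
    body = json.dumps(packet, sort_keys=True).encode()
    packet["sig"] = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return packet

def verify_packet(packet, now=None, max_age=30):
    now = int(time.time()) if now is None else now
    body = json.dumps({"image": packet["image"], "ts": packet["ts"]},
                      sort_keys=True).encode()
    expected = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, packet["sig"]):
        return "tampered"        # image or timestamp was modified in transit
    if now - packet["ts"] > max_age:
        return "replayed"        # an old packet played back later
    return "ok"
```

Without the signature, swapping the image is trivial; without the timestamp, a captured “pass” packet works forever — which is exactly the pair of flaws described above.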
While testing one application, he found that whether face recognition succeeds is determined by a threshold in the returned message — the equivalent of a “passing score” on an exam. If the face match exceeds the threshold, the check passes. Unfortunately, the app did not sign the return message, so it could be tampered with, and Gao Xiaochu ultimately cracked its face recognition simply by lowering the threshold.
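That flaw reduces to a few lines: the “passing score” travels in an unsigned response, and the client performs the comparison itself. The field names here are invented for illustration.

```python
# Toy model of the threshold flaw: the server sends the match score AND
# the passing threshold in one unsigned message, and the client decides.

def server_response(score):
    return {"score": score, "threshold": 0.85}      # unsigned message

def client_decide(resp):
    return "pass" if resp["score"] >= resp["threshold"] else "fail"

resp = server_response(score=0.40)      # poor match: someone else's face
assert client_decide(resp) == "fail"

resp["threshold"] = 0.10                # tamper with the unsigned message
assert client_decide(resp) == "pass"    # "cracked" by lowering the bar
```

The decision (and at minimum the threshold) belongs on the server; if it must travel to the client, the message needs a signature the client cannot forge.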
APP Risk Preliminary Investigation
Gao Xiaochu presented the results of his APP risk survey on the spot. He found that beyond general attendance-tracking and account-security apps, a large number of banking and P2P finance apps have integrated face recognition technology. He also observed that:
when the financial industry uses face recognition, security is significantly higher than in general applications;
when face recognition is involved in critical business, the level of security protection is usually higher.
For example, while he was testing a domestic P2P finance client, after several failed attempts to crack its face recognition the app detected possible malicious activity and forced him to complete authentication through bank card information, SMS verification, and other means.
Gao Xiaochu emphasized at the scene that beyond the flaws in how face recognition is applied on mobile phones, many problems arise because developers do not strictly follow security standards: integration processes are sloppy, and it is common to see security sacrificed for the sake of user experience — a practice especially widespread at small companies with weak technical capabilities. The end result is that users are effectively asked to write their passwords on their faces.