BBVA API Market
The Cloud Vision API provides developers with image-analysis software that enables them to create apps that recognize and understand objects. Google has developed the technology behind the API, and it can be embedded in other applications as they are being built.
Cloud Vision can analyze any component of an image and determine which parts are significant enough to highlight. It can also tell objects apart and detect human faces, along with the person's emotional state. The technology can even read text written in several languages.
The API is also capable of detecting faces in videos and live broadcasts. What's more, it can recognize facial gestures: Are they smiling? Are their eyes open?
Developers can use these capabilities to create hands-free controls for games and applications, or to make an application react when a person smiles, for example. Note, however, that the technology only provides information about positions and movements within a sequence; it does not recognize whose face it is.
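A face-detection request of this kind can be sketched against Cloud Vision's REST endpoint (images:annotate). The request and response field names below (FACE_DETECTION, joyLikelihood) follow the API's JSON; the image bytes and the likelihood threshold chosen for "smiling" are illustrative assumptions.

```python
import base64

def build_face_detection_request(image_bytes, max_results=10):
    """Build the images:annotate request body asking for face detection."""
    return {
        "requests": [{
            # The API expects the image content base64-encoded.
            "image": {"content": base64.b64encode(image_bytes).decode("ascii")},
            "features": [{"type": "FACE_DETECTION", "maxResults": max_results}],
        }]
    }

def is_probably_smiling(face_annotation):
    """Interpret the joyLikelihood field of a returned faceAnnotation.

    The cutoff at LIKELY is our assumption, not something the API mandates.
    """
    return face_annotation.get("joyLikelihood") in ("LIKELY", "VERY_LIKELY")
```

An app could POST this body (with an API key) and then drive its hands-free logic off the likelihood fields of each returned faceAnnotation.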
How Cloud Vision can help
Since the API is capable of detecting human emotions, a computer can tell whether a person is happy while using a specific application. This is very useful for developers, who can thereby determine which features users like best.
First the technology detects the reference points, or landmarks, on a face, and then it uses those points to analyze the complete face. In other words, it detects the face as a whole, not just the individual reference points.
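The landmark step above can be illustrated by parsing the landmarks list that Cloud Vision returns inside each faceAnnotation. The landmark types (LEFT_EYE, NOSE_TIP) and the position structure mirror the API's JSON; the sample annotation itself is made up for illustration.

```python
def landmark_positions(face_annotation):
    """Map each detected landmark type to its (x, y) image position."""
    return {
        lm["type"]: (lm["position"]["x"], lm["position"]["y"])
        for lm in face_annotation.get("landmarks", [])
    }
```

With the landmark positions in hand, an application can reason about the whole face: relative distances between eyes, nose and mouth are what the analysis is built on.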
This API can also help build a much cleaner – and therefore much better – Internet. Because it is able to understand images and objects, it can flag inappropriate content so that it can be reported. It can also help prevent violent situations or conflicts, wherever they may occur.
How to access the API
Developers interested in testing the API can do so from Android Studio, the Android development environment. Sample code can be downloaded through the GitHub platform, either by clicking the Download ZIP button or by cloning the repository from the command line: https://github.com/googlesamples/android-vision.git
Next, import the project into Android Studio by browsing to the directory where the downloaded Vision samples repository was saved. On running the application, it should display the image of a face with tiny circles on the eyes, cheeks, nose and mouth.
Google Cloud Vision has been greeted with great satisfaction, but also with some concern over its privacy implications, since the API could be construed as a tool Google has developed to capture every aspect of the world and analyze it to its own advantage.
The API is free for fewer than 1,000 images per day; beyond that threshold, a monthly fee applies.
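Using the daily threshold stated above, the billing boundary can be sketched as a simple calculation; the 1,000-image figure comes from the article, while the actual fee structure beyond it is Google's and is not modeled here.

```python
FREE_IMAGES_PER_DAY = 1000  # free allowance stated in the article

def billable_images(images_today):
    """Images beyond the free daily allowance (sketch; fee amount not modeled)."""
    return max(0, images_today - FREE_IMAGES_PER_DAY)
```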
The advantages of Cloud Vision are obvious, but will the technology affect applications? And will it impact our own behavior?