BBVA API Market
The API provides developers with image-recognition technology they can use to create apps that recognize and understand objects. Google developed the underlying technology, but it can be embedded in other applications as they are being built.
Cloud Vision can scan any component of an image and decide which parts are important enough to be displayed. It can also distinguish objects and detect human faces along with the person’s state of mind. The technology can even read text in an image and translate it into several languages.
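As a concrete illustration, the public Cloud Vision REST endpoint (`images:annotate`) accepts a JSON request that names the analyses you want, such as labeling objects and reading text. The sketch below builds such a payload; it is a minimal example assuming the image is sent inline as base64, and `max_labels` is just an illustrative parameter name.

```python
import base64

def build_annotate_request(image_bytes, max_labels=10):
    """Build a Cloud Vision v1 images:annotate request body that asks
    the API to label the image and read any text it contains."""
    return {
        "requests": [{
            # Image content is sent inline, base64-encoded.
            "image": {"content": base64.b64encode(image_bytes).decode("ascii")},
            "features": [
                {"type": "LABEL_DETECTION", "maxResults": max_labels},
                {"type": "TEXT_DETECTION"},
            ],
        }]
    }

# The resulting payload would be POSTed to
# https://vision.googleapis.com/v1/images:annotate with your API key.
```

The response mirrors this structure, returning one annotation object per requested feature.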
The API is also capable of detecting faces in videos and live broadcasts. What’s more, it can recognize people’s gestures. Are they smiling? Are their eyes open?
Developers will be able to use these technologies to create hands-free controls for games and applications, or to enable an application to react when a person smiles, for example. However, the technology only provides information about the position and movements that occur in a sequence.
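For face analysis, Cloud Vision reports each emotion as a likelihood string ("UNKNOWN", "VERY_UNLIKELY" … "VERY_LIKELY") on every detected face. A minimal sketch of turning that into the yes/no question above ("Are they smiling?"), assuming a parsed JSON response:

```python
# Likelihood values Cloud Vision treats as a positive signal.
LIKELY = {"LIKELY", "VERY_LIKELY"}

def is_smiling(face_annotation):
    """face_annotation: one entry from the response's faceAnnotations
    list, e.g. {"joyLikelihood": "VERY_LIKELY", ...}."""
    return face_annotation.get("joyLikelihood") in LIKELY
```

In the Android Mobile Vision samples mentioned later, the equivalent signal is a probability rather than an enum, e.g. `face.getIsSmilingProbability()`.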
How Cloud Vision can help
Since the API is capable of detecting human emotions, computers will know whether a person is happy while using a specific application. This is very useful for developers, who can use it to determine which features users like best.
The technology first detects reference points (facial landmarks), and then uses those points to analyze the face as a whole. In other words, it builds its picture of the face from the reference points outward.
This API can also help build a much cleaner – and therefore much better – Internet. Because it can understand images and objects, it can flag inappropriate content so that it can be reported. It can also help detect violent content, wherever it may appear.
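This capability is exposed through the API's SafeSearch feature, which returns likelihood strings per category (e.g. "adult", "violence") for an image. A sketch of a simple flagging rule over that annotation, with the threshold choice being an assumption of this example:

```python
# Likelihoods we treat as worth flagging (assumed threshold).
FLAG = {"LIKELY", "VERY_LIKELY"}

def should_flag(safe_search_annotation):
    """safe_search_annotation: the API's safeSearchAnnotation object,
    e.g. {"adult": "UNLIKELY", "violence": "POSSIBLE", ...}.
    Returns True if the image should be reported for review."""
    return any(safe_search_annotation.get(category) in FLAG
               for category in ("adult", "violence"))
```

An application would request this by adding `{"type": "SAFE_SEARCH_DETECTION"}` to the features list of the annotate request.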
How to access the API
Developers interested in testing the API can try it in Android Studio, the Android development environment. Sample projects can be downloaded from GitHub, either by clicking the Download ZIP button or by cloning the repository from the command line: https://github.com/googlesamples/android-vision.git
Next, import the project into Android Studio by navigating to the directory where the downloaded android-vision samples repository was saved. When the application runs, it should display the image of a face with tiny circles on the eyes, cheeks, nose and mouth.
Google Cloud Vision has been greeted with great satisfaction, but also with some concern about its privacy implications, since the API could be construed as a tool developed by Google to capture every aspect of the world and analyze it to its own advantage.
The API is free for fewer than 1,000 images per day; beyond that, there is a monthly fee.
The advantages of Cloud Vision are obvious, but will the technology affect applications? And will it impact our own behavior?