BBVA API Market
Image: Tokyofoodcast (on Flickr)
Large and small companies have always used the information within their reach to better understand their business and try to get better results.
With the advent of computers and the ability to store data systematically, new possibilities opened up. Data was collected at first in order to improve transactional processes. With sales and purchase information suitably stored it was easy to automate billing and restocking processes, and to improve management of the supply chain… However, we soon discovered that this same data, when suitably cooked, could be a source of unbeatable value.
At the start 2 plus 2 made 4
Companies have made an extensive and pervasive use of carefully gathered information. Thus, we have been able to respond to increasingly sophisticated questions: how much is invoiced each month? How much does it cost to produce each item? What products or product families provide the biggest margins? What will be sold next quarter?
The above questions can in fact be answered with a small amount of very well structured data. Basically, sales and cost data (by time, area, product and customer), suitably cooked, has served to satiate the hunger for information felt by analysts, product managers and executives at the companies that today supply our fridges, closets and leisure spaces.
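To make this concrete, here is a minimal sketch of the kind of question that well structured sales data can answer directly. The table layout and the figures are hypothetical, purely for illustration:

```python
import sqlite3

# A tiny in-memory sales table: product, revenue and cost per sale
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (product TEXT, revenue REAL, cost REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?, ?)",
    [("yogurt", 120.0, 80.0), ("cheese", 200.0, 150.0), ("milk", 90.0, 85.0)],
)

# "Which products provide the biggest margins?" is one GROUP BY away
rows = conn.execute(
    """SELECT product, SUM(revenue - cost) AS margin
       FROM sales GROUP BY product ORDER BY margin DESC"""
).fetchall()
for product, margin in rows:
    print(product, margin)  # cheese 50.0, yogurt 40.0, milk 5.0
```

Questions about billing, cost per item or margins by product family are all variations on this same aggregation pattern.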
This basic, structured data will certainly continue to be part of every organization’s kitchen in the coming years.
New information everywhere
However, companies are capable of generating and capturing ever greater amounts of data. The social networks where we interact, the phones we carry around with us, the loyalty cards we use to get discounts, or the cars we take to be serviced when the warning light comes on, are all clear examples of processes that generate (and store) more information about us and our habits.
We know that it is useless to gather data without "cooking" it. How will this new data be cooked? And what for?
The second question is easy to answer: the data will be processed for the same reasons data has been processed in recent decades, i.e. to sell more, gain market share, understand the customer, etc. From the business (and technological) viewpoint the important question is the first one: how should this new data be processed to extract useful knowledge about the customer, the product, the market, etc.?
Big Data: A new challenge for a new reality
There is no single definition of what "big data" is, but it is normally accepted that "big data" refers to the accumulation of large amounts of data and to the procedures used to find repetitive patterns in this data (taken directly from the Spanish entry in Wikipedia). Technologically it is a huge challenge because it involves combining tools that have grown separately (storage capacity, computing capacity, visualization capacity, prediction capacity, the capacity to search for and detect patterns, the cloud, etc.). All these disciplines have advanced incredibly in recent years, which explains why you can now read this article on a device with more processing power and storage capacity than the spacecraft that took Neil Armstrong to the Moon. The challenge now is to combine these tools to keep squeezing value out of the new information available to us.
There are many differences between the traditional analytical use of information (OLAP analysis, scorecards, reporting, etc.) and the new "big data". It is true that we now work with much more, and more diverse, information (much of it less structured), but probably the biggest difference lies in the questions we ask.
Before, we asked simple questions that were easy to translate into the SQL language used by databases (sales of such and such a product, during a certain period, customer by customer, etc.). From now on, the questions will be harder to pose, and much more complex predictive and cognitive algorithms will be required to answer them. Now not only do we know the customer's purchase history, but we can also know how this customer, or thousands more like them, interact with our company in different ways (complaints, social networks, etc.), and it is even possible to access their musical tastes or their opinions on a particular event.
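As a toy illustration of this newer kind of question, the sketch below compares customers not by a single column sum but by their whole interaction profile across channels, using cosine similarity. The customers, channels and counts are entirely hypothetical:

```python
from math import sqrt

# Interaction counts per channel: (purchases, complaints, social mentions)
customers = {
    "ana":   (12, 0, 5),
    "bruno": (10, 1, 4),
    "carla": (1, 6, 0),
}

def cosine(a, b):
    """Cosine similarity between two interaction vectors (1.0 = same profile)."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b)))

# Which existing customer does "bruno" most resemble?
target = customers["bruno"]
best = max((name for name in customers if name != "bruno"),
           key=lambda name: cosine(customers[name], target))
print(best)  # "ana": a frequent buyer, not a frequent complainer
```

This is the simplest possible sketch; real systems apply the same idea at scale, with far richer features, to recommend products or predict behavior.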
The human brain as a model
In a way, "big data" tries to imitate human intuition, but systematically and on a grand scale. We try to guess what music the person sharing the elevator with us is listening to, based, for example, on the clothes he/she is wearing. What can companies know about us based on all the information that, consciously or unconsciously, we give them? Will they know more about our interests and offer us more personalized products and services? Or will this represent an intolerable violation of our privacy? What impact will it have on social and community programs?
It is not easy to answer these questions for the same reason that it is not as easy as it seems to guess the music the person next to us is listening to based on the clothes he/she is wearing. Or is it?