Human data isn’t just about customers; it is about PEOPLE – employees, marketers, and suppliers. Behind every application and web browser is a person interacting, directly or implicitly, with another person, each of whom wants a reasonable balance of security and control over their data. Above all, human data is about respecting that data has become so important to people’s livelihoods – their credit scores just as much as their personalities – that it shouldn’t be treated any differently than they themselves would be treated.
One of the biggest data-related challenges organizations and companies face is that there is simply too much of it. For many years, organizations have relied on GIS to manage, visualize, and analyze their data. More recently, GIS technology has advanced to meet the demands of a higher velocity and volume of data, and the addition of machine learning, AI, and real-time capability is accelerating that process. There is a need to explore both how we use data and which data we use. #DataFestKampala, organized by www.pollicy.org under the theme “Living with Data”, generated a stream of chats, kickstarting a conversation among different stakeholders around the topic.
Digital innovations don’t happen in the blink of an eye. They sprout from human interactions and information – normally termed data – drawn from our physical world. Every development, every prototype, every newer and smarter innovation or capability is intimately connected to us, the end users: PEOPLE. That is why it is paramount to innovate with strong inclusion of people from the start.
Data mining of massive data sets is transforming the way we think about crisis response, marketing, entertainment, cybersecurity, and national intelligence. Collections of documents, images, videos, and networks are being thought of not merely as bit strings to be stored, indexed, and retrieved, but as potential sources of discovery and knowledge. This requires sophisticated analysis techniques that go far beyond classical indexing and keyword counting, aiming instead to find relational and semantic interpretations of the phenomena underlying the data.
There is a need to examine the frontier of data analysis, whether the data sits in a static database or streams through a system. Data at this scale – terabytes and petabytes – is increasingly common in science (e.g., remote sensing, genomics), e-commerce, business analytics, national security, communications, and elsewhere. The tools that infer knowledge from data at smaller scales do not necessarily work, or work well, at such massive scale. New skills, tools, and approaches are necessary, and making useful inferences from massive data demands cross-disciplinary knowledge drawn from computer science, statistics, machine learning, and the application discipline itself.
It was Albert Einstein who said, “Not everything that can be counted counts, and not everything that counts can be counted.” Just because we can analyze mountains of data doesn’t necessarily mean that we will find anything useful, or that we will use it to create value once we do.
One critical fact gets overlooked: big data is only as useful as the humans behind it. It is the people who interrogate the data, form hypotheses, test conclusions, build prototypes and algorithms, and then determine the final direction who make big data succeed or fail.
Time and again, we see the results of big data unmediated by the human touch and common sense, whether we are shopping online or, as farmers, trying to get agro-insurance through the www.m-omulimisa.com app. In the effort to create value, companies instead create big data embarrassments. Human beings and human-oriented decisions must play a fundamental role in any big data strategy, or companies and organizations risk alienating their clients and damaging their brands.