# Embrace AI but be aware of the data pitfalls for the unwary

Artificial intelligence (AI) has made impressive strides over the past five years, with ChatGPT's mainstream success marking an inflection point. The internet is filled with images, videos, songs and articles created by generative AI. Algorithms in the software and platforms we use every day, such as Netflix, Apple or Google Maps, and Uber, help to shape the media we consume, the products we buy, and even the routes we take to work.

*Brian Civin, chief sales and marketing officer at AfriGIS*

Out of the public eye, companies are putting AI to work for applications as diverse as detecting fraud, generating code and reaching customers with personalised marketing. The tech and automotive industries are well advanced in trialling autonomous vehicles that use machine learning and sophisticated algorithms to navigate the streets safely.

And this is just the beginning. As AI matures, we're starting to see use cases where the technology stands in for humans and makes decisions on their behalf. A Snapchat influencer called Caryn Marjorie has created an AI voice bot version of herself to talk to her followers in real time. Meanwhile, Chinese tech company NetDragon WebSoft has 'appointed' an AI bot named Tang Yu as its CEO.

## The overlooked risks of accelerating AI adoption

These examples show that AI has come a long way and that there are compelling use cases for it in nearly every industry and business function. Yet there is also a danger that companies will overlook the risks of AI as they accelerate adoption over the next few years.

Though AI can support decision-making and critical thinking, it can't completely replace human agency and judgement. However quickly AI evolves and improves, it will never be perfect, for the simple reason that it relies on algorithms and data fed to it by humans. Although AI systems can 'learn', they can't completely overcome challenges such as incomplete or inaccurate data, or biased starting assumptions. This introduces a range of risks every company should be aware of as it ramps up its use of AI to automate business processes and support decision-making.
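To make the point about incomplete data concrete, consider a minimal, purely illustrative Python sketch. All of the names, figures and the decision rule below are hypothetical, not drawn from any real system; the sketch simply shows how a rule learned from an unrepresentative sample misfires on cases the data never covered.

```python
# Purely illustrative: all data, names and thresholds are hypothetical.
# A decision rule learned from an incomplete sample misfires on cases
# the training data never covered.

# Historical loan decisions sampled only from one region ("north");
# applicants from the "south" never appear in the training data.
training_data = [
    {"region": "north", "income": 52_000, "approved": True},
    {"region": "north", "income": 48_000, "approved": True},
    {"region": "north", "income": 30_000, "approved": False},
]

# A naive "model": approve anyone earning at least the average income
# of previously approved applicants.
approved_incomes = [row["income"] for row in training_data if row["approved"]]
threshold = sum(approved_incomes) / len(approved_incomes)  # 50,000 here

def decide(applicant: dict) -> bool:
    """Automated decision inherited from the skewed sample."""
    return applicant["income"] >= threshold

# A southern applicant with a solid income by local standards is rejected,
# because the threshold encodes only northern income levels.
print(decide({"region": "south", "income": 45_000}))  # False
```

No amount of further 'learning' on the same skewed sample fixes this: the model faithfully reflects the data it was given, which is exactly why the governance measures discussed next matter.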
## Trust your own data, people and partners

The upshot is that companies can't automate their critical thinking or outsource data governance. Forward-thinking organisations will scope the risks that low-quality and inaccurate data pose to data-driven decision-making and AI processes in their businesses. This exercise should coordinate legal, business and technical skills, considering the veracity of data from multiple perspectives.

As they roll out AI systems, companies should set clear standards for which data sources they will use to fuel those systems, the minimum requirements for trusting data, who will control the data, and who may use it. It's important to know where and how the data was collected if one is to avoid using biased, incomplete or otherwise inaccurate datasets.

Companies should generate and use their own in-house data, or partner with a trusted entity, to access accurate, reliable data to fuel AI algorithms. It's also preferable to work in a closed environment where only the business and its close partners can access and work with the data. To further reduce risk, organisations should put clear policies in place about how employees can use AI and how AI decisions should be explained and validated.

## Conclusion

While AI has made remarkable progress in recent years, it is essential for companies to be aware of the potential risks associated with its adoption. Trusting one's own data, people and partners becomes crucial, along with establishing clear standards, data governance and policies to mitigate risks. Striking the right balance between AI's capabilities and human judgement is key to unlocking business value in a responsible manner.

## About the author

Brian Civin is chief sales and marketing officer at AfriGIS.