Being fully trustworthy is a very strong requirement. Apart from my wife and good friends, are there even other people I would “fully trust”? There are however plenty of other friends, family, colleagues, and software systems that I trust a lot, a trust which has been earned over time and only rarely been broken.
In this regard, AI is not very different from other software systems, particularly those used in critical infrastructure. Trust in software is a complex topic, but, simply put, it rests on a combination of good engineering, rigorous testing, and transparency. During development, every applicable best practice should be taken into consideration. The software should be tested both during development and in production. Developers should be transparent about the limitations and challenges the specific software might be subject to. Open-source software is very successful in many areas thanks to its full transparency.
Most AI developers put a lot of care into designing responsible, well-tested models and are transparent about the data used to train them. This already exceeds the requirements we place on many of the classical, algorithm-based software systems we use every day.
The upcoming regulations on AI put significant emphasis on design, testing, and transparency, and should further strengthen trust in future AI systems.
We rightfully trust software in many areas of life: it builds the devices we use every day, keeps track of all our financial wealth, assists in surgery in hospitals, and flies us safely to our vacation destination. There is no inherent reason to put less trust in AI-based software than in any other software system, provided the proper safeguards are in place.
Being a digital leader today is almost equivalent to being a leader in AI. These companies put significant resources into AI development, have the required digital infrastructure, and follow strong strategies for integrating AI into many of their business lines.
Amazon, for example, takes a multi-faceted approach to how AI supports trust in online retail.
Customers put a lot of trust in the quality of product reviews. Separating genuine reviews from those written by bots or in bad faith is a monumental task; over the last few years it has been supported by AI that looks for anomalies and patterns. However, the reviews themselves might already be written by an AI - a classic cat-and-mouse game.
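Anomaly detection of this kind can be as simple as scoring reviewer behaviour statistically. A minimal sketch, where the feature (reviews posted per day) and the threshold are made-up assumptions; real systems combine far richer signals:

```python
# Hypothetical sketch: flagging bot-like reviewer accounts as statistical
# outliers. The data and the z-score threshold are illustrative only.
from statistics import mean, pstdev

def z_scores(values):
    """How many standard deviations each value sits from the mean."""
    mu, sigma = mean(values), pstdev(values)
    return [(v - mu) / sigma if sigma else 0.0 for v in values]

# Reviews posted per day by six accounts; one bursty, bot-like account.
reviews_per_day = [1, 2, 1, 3, 2, 40]

# Flag accounts more than two standard deviations above typical behaviour.
flagged = [i for i, z in enumerate(z_scores(reviews_per_day)) if z > 2]
# flagged == [5], the account posting 40 reviews a day
```

Production systems would replace the single feature with many (review length, rating distribution, account age, text similarity) and the fixed threshold with a trained model, but the principle of hunting for statistical anomalies is the same.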
Cybersecurity and the security of the third-party marketplace are other areas where AI is used to build more effective protection against theft of customer data and against fraud attempts by sellers who try to abuse the platform.
Personalised recommendations, another big area for AI, may involve conflicting goals. A trusted independent advisor would suggest the most affordable option that still fulfils all our requirements. An "online retailer AI" might also consider products that are profitable, products backed by advertising money, or products from the retailer's own portfolio. These AI systems are therefore trained to balance results that benefit the customer against those that benefit the retailer. The selection of sweets at the supermarket checkout reflects similar considerations, made by human intelligence.
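That balancing act can be pictured as a weighted ranking score. A minimal sketch, where the products, their `relevance` and `margin` values, and the weight `alpha` are all illustrative assumptions rather than any retailer's actual formula:

```python
# Hypothetical sketch of a blended ranking score: alpha=1.0 ranks purely
# for the customer, alpha=0.0 purely for the retailer. All values made up.
products = [
    {"name": "budget",      "relevance": 0.9, "margin": 0.1},
    {"name": "premium",     "relevance": 0.7, "margin": 0.6},
    {"name": "house_brand", "relevance": 0.6, "margin": 0.9},
]

def blended_score(product, alpha=0.7):
    """Weighted mix of customer benefit (relevance) and retailer benefit (margin)."""
    return alpha * product["relevance"] + (1 - alpha) * product["margin"]

ranked = sorted(products, key=blended_score, reverse=True)
# With alpha=0.7: house_brand (0.69) edges out premium (0.67) and budget (0.66)
```

Note how even a customer-leaning weight of 0.7 lets the high-margin house brand win the top slot; with `alpha=1.0` the budget product would rank first. Tuning that single parameter is, in miniature, the trade-off the article describes.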
1. Not thinking about challenges and technology fit before investing. Several companies hired data scientists and created AI teams without considering the business as a whole. AI is not for every company, and the groundwork must be laid before the technology can be used effectively.
2. Focusing on moon shots without thinking of the thousands of parts that go into a rocket. CEOs should not expect AI to transform their whole business and create significant new revenue while disregarding the efficiency and profitability gains that could be achieved by carefully applying it to any number of business processes.
3. Not considering the people dimension. In most cases, AI will not replace humans but work together with them, increasing the efficiency and quality of their work. It is paramount to involve employees early and consistently, to get feedback on the performance of the AI tool and to find the best way to integrate it into existing business processes.
4. Focusing only on internal development of AI tools that work on the company's own data and are directly connected to its business processes. Is the company working in a multilingual context? Why not consider an external AI-based translation tool? Is it receiving a lot of paper documents? An OCR tool that creates PDF or Word documents might quickly improve the existing workflow.
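Adopting such an off-the-shelf tool is often little more than a thin wrapper around an external service. A minimal sketch of the paper-document workflow, where `ocr` stands in for any hypothetical external OCR call (here replaced by a stub):

```python
# Hypothetical sketch: wrapping an external OCR service in a small pipeline.
# The `ocr` callable represents any third-party API; names are illustrative.

def paper_to_text(scan_path: str, ocr) -> str:
    """Run OCR on a scanned page and return text cleaned for document export."""
    raw = ocr(scan_path)
    # Normalise whitespace so downstream Word/PDF templates render cleanly.
    return " ".join(raw.split())

# Stub standing in for a real OCR call during development and testing.
fake_ocr = lambda path: "Invoice  No. 42\n  Total:  100 EUR"

text = paper_to_text("scan_001.png", fake_ocr)
# text == "Invoice No. 42 Total: 100 EUR"
```

Passing the OCR engine in as a parameter keeps the workflow testable without the external service and makes it easy to swap vendors later.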
The key message is not to focus on AI knowledge alone, but to build a team that is able to integrate with the company's business.
Notably, the domain is developing rapidly. It does require an AI specialist who is able to identify the most suitable methodologies and tools to support the business; this will not necessarily require a PhD unless the company is working on highly innovative topics. These specialists will also have the knowledge to avoid biases and create an AI model that is trustworthy and well tested.
The technical foundation needs to be there. A data engineer helps you connect AI to your existing data infrastructure, or connect to external AI tools. A well-designed integration will significantly increase the usability of your AI systems.
To identify the right applications and processes for AI, the team should include a business expert who has been with the company for a while or has deep knowledge of its field, and who can identify and prioritise the most beneficial areas for implementation.
Ideally, a change manager should be involved to bridge between the users of the AI systems and the AI team developing them. Human oversight will be required by regulation - and it will often also improve the performance of the AI. Early and continuous user involvement will increase both satisfaction and the performance of AI-augmented business processes.
1. Do your research and find inspiration. While every company is unique, there are bound to be similar organisations that have experimented with AI and might have shared their experiences. Check what the competition has done and what may or may not have worked for them.
2. The area of AI is growing very rapidly. Your strategy should not be set in stone; it should be flexible enough to accommodate disruptive innovation and new technologies coming from the research community.
3. Focus on responsible application of AI and a trustworthy implementation. Not only is this good practice, the upcoming EU AI Act will also mandate it for your AI applications, even those that you acquire from third parties.
4. Not every organisation needs an AI strategy. Think about your existing processes and how suited they are to potential AI applications. If nothing would benefit from AI, there are other areas you should focus on.
5. Your employees remain your most valuable asset, even with a state-of-the-art AI platform. Set up processes that empower them to develop their own ideas and innovations around AI. Train them on the potential of the technologies and tools that you have adopted, and listen to the suggestions they give you.
About the blog:
There is an urgent need for rapid transition to global sustainability. Business and industry have enormous social and environmental impacts. "Why does it matter?" is a bi-monthly blog that aims to elucidate this important topic through the eyes of our experts.
Don't miss our experts' practical tips for your daily life and be part of the positive change.