Charlette is an AI Venture builder with a specialisation in emerging technologies. In 2018, she co-founded and led the development of the BACE API, an AI-powered remote identity verification system using facial recognition to combat online identity fraud and strengthen the digital identity system in Africa.
The Academy’s Digital and Physical Infrastructures team spoke to Charlette via video call about what a fair future with AI would look like and what is needed to get there.
What are your hopes for the development and deployment of generative AI?
The rise in popularity of ChatGPT has led to more people paying attention to AI, and we've seen a proliferation of generative AI tools. I hope this creates an opportunity for people to build AI models and find AI solutions that make a real societal impact.
I also hope that the stakeholders involved in producing AI solutions, such as innovators, use this opportunity to be conscious and responsible as they think about how generative AI can make a positive impact at a local and global level. We need to build solutions that address real challenges by leveraging the technology in the right way. Experience shows that when people build AI models quickly, building for the sake of building, they rarely spend enough time on things like the ethics of data collection and data privacy. So we should make sure that whatever we build is not just chasing AI trends, but meets people’s needs while following certain principles.
Do you have any fears about the progress and implementation of generative AI?
We currently face a range of challenges, and one area of fear is misinformation, which is amplified by AI-generated content that often includes inaccuracies. The alarming aspect is the rapid and widespread sharing of such content, as individuals often neglect to verify the credibility of the sources they encounter online. It is particularly hard to distinguish true from false information, because generative AI can produce highly convincing, coherent content that is difficult to tell apart from human-generated content.
As an AI venture builder, I also worry about accountability as we continue advancing this technology. For instance, there's growing criticism over biases in AI models, the lack of diverse data and data security, among other issues. I believe that we should also ask, who is accountable for that? Right now, AI regulations are still non-existent in many countries, and there is a tendency for innovators to avoid responsibility by hiding behind the system and deferring accountability.
At this juncture, what questions do we need to be asking ourselves to enable the safe and impactful use of AI?
In the world of AI, various stakeholders play significant roles, including innovators, regulators, and researchers, among others. Each of these groups must bring their attention to specific questions.
For instance, for researchers, the crucial questions are: “how can my research help find practical ways to solve local and global problems?” and “what is the long-term impact of my AI research on society?”
For innovators, the questions to ask are: “what specific problems am I addressing? How can I ensure that my AI model adheres to existing regulatory standards? Does my AI algorithm demonstrate the reasoning behind a given model’s decision-making?”
Regulators need to recognise the importance of engaging with various stakeholders, especially those involved in the development of AI models, to enhance the synergy between regulations and innovation. It is crucial to ensure that AI regulations do not act as barriers for innovators, but instead establish a secure framework within which innovators can advance their AI solutions.
What are the respective roles of researchers, those deploying the technologies, regulators, and policy makers in enabling safe use of generative AI?
As I mentioned earlier, all parties must work together to ensure the safe use of generative AI. Generative AI is the product of long research efforts, and, as with any new technology, collaboration becomes vital once its initial outcomes are understood. The reality is that generative AI is new, and some experts are still struggling to keep pace with how widely generative AI models are being used. This is why researchers must keep conducting rigorous studies to uncover potential risks, vulnerabilities, and biases. This will also help to reveal new patterns relevant to the production of generative AI.
People deploying generative AI, like entrepreneurs and businesses, have the opportunity to enhance existing models via the process of fine-tuning AI algorithms. They also provide these AI models to the market and create ways to test them and get user feedback. This is important for ensuring accuracy and tracking the impact of generative AI models.
Regulators and policymakers must work together to create laws for generative AI and make sure everyone follows them, which helps ensure safe use of generative AI.
What is not being talked about in the ongoing media discussion around generative AI?
Conversations about biases in data sets and AI models, especially those affecting the African market, are ongoing, and it remains essential for me to highlight this issue and encourage active engagement among AI stakeholders, who should ensure the inclusivity of generative AI models. I believe this is an important step towards easing worries about the impacts of generative AI tools.
Additionally, there is a lesser-discussed aspect: the psychological impact of public uncertainty around generative AI. For example, owing to misinformation spread by AI tools, some individuals find themselves confused, unable to distinguish reality from fiction. Such confusion could contribute to mental health challenges with enduring consequences, especially among younger age groups. There is also noticeable societal pressure around the notion of machines versus humans, and this framing is far from constructive. These circumstances call for further dialogue, proactive engagement, and tangible action to address the issues and educate people.