Artificial intelligence (AI) is seemingly everywhere. As AI models like ChatGPT experience a meteoric rise in popularity, critics and regulators are calling for action on the potential threats AI poses. Understandably, this has sparked a debate about whether the merits of AI outweigh its risks.
In recent months, the U.S. Federal Trade Commission has issued several statements on AI programs. These culminated in a statement made in April 2023 in conjunction with the Civil Rights Division of the U.S. Department of Justice, the Consumer Financial Protection Bureau, and the U.S. Equal Employment Opportunity Commission to support “responsible innovation in automated systems.”
Figure 1 It’s about time to weigh the ethical side of AI technology. Source: White Knight Labs
Why the FTC is beginning to explore AI
Cybersecurity expert Greg Hatcher, co-founder of White Knight Labs, says there are three main areas of concern for the FTC: inaccuracy, bias, and discrimination. He adds that there is good reason to be worried. “Time has shown that models can be accidentally trained to discriminate based on ethnicity, and the vast majority of AI developers are white men, which leads to homogeneous perspectives,” he explains.
However, according to cloud computing guru Michael Gibbs, founder and CEO of Go Cloud Careers, this bias is not inherent to AI systems, but a direct result of the biases instilled in them by their creators. “Artificial intelligence is not inherently biased—AI can become biased based on the way it is trained,” Gibbs explains. “The key is to use unbiased information when developing custom AI systems. Companies can easily avoid bias with AI by training their models with unbiased information.”
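Gibbs's point, that bias comes from the training data rather than the algorithm itself, can be illustrated with a deliberately simplified sketch. The groups, approval rates, and the "model" below are all hypothetical; a real system learns far subtler statistical patterns, but the mechanism is the same: a model trained on skewed historical decisions reproduces the skew, while the same code trained on balanced data does not.

```python
from collections import defaultdict

def train(records):
    """Learn the historical approval rate per group -- a stand-in for
    the statistical patterns a real model absorbs from its data."""
    approvals, totals = defaultdict(int), defaultdict(int)
    for group, approved in records:
        totals[group] += 1
        approvals[group] += approved
    return {g: approvals[g] / totals[g] for g in totals}

def predict(model, group):
    # The "model" simply replays the rate it saw during training.
    return model[group] >= 0.5

# Hypothetical history in which group "B" was approved less often
# for reasons unrelated to qualification.
biased_history = ([("A", 1)] * 80 + [("A", 0)] * 20 +
                  [("B", 1)] * 30 + [("B", 0)] * 70)
balanced_history = ([("A", 1)] * 50 + [("A", 0)] * 50 +
                    [("B", 1)] * 50 + [("B", 0)] * 50)

biased_model = train(biased_history)
fair_model = train(balanced_history)

print(predict(biased_model, "A"), predict(biased_model, "B"))  # True False
print(predict(fair_model, "A"), predict(fair_model, "B"))      # True True
```

The algorithm is identical in both runs; only the data differs, which is exactly the distinction Gibbs draws.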
Executive coach and business consultant Banu Kellner has helped numerous organizations responsibly integrate AI solutions into their operations. She points to the frenzy around AI as a major reason behind many of these shortcomings.
“The crazy pace of competition can mean ethics get overshadowed by the rush to innovate,” Kellner explains. “With the whole ‘gold rush’ atmosphere, thoughtfulness sometimes loses out to speed. Oversight helps put on the brakes, so we don’t end up in a race to the bottom.”
Responsible implementation of AI
Kellner says the biggest challenge business leaders face when adopting AI technology is finding the balance between their vision as a leader and the increased efficiency that AI can offer to their operations. “True leadership is about crafting a vision and engaging other people toward that vision,” she says. “As humans, we must assume the role of the architects in shaping the vision and values for our emerging future. By doing so, AI and other technologies can serve as invaluable tools that empower humanity to reach new heights, rather than reducing us to mere playthings of rapidly evolving AI.”
As a leading cybersecurity consultant, Hatcher is most interested in the influence AI will have on data privacy. After all, proponents of artificial intelligence have hailed AI’s ability to process data at a level once thought impossible, and the training process that improves these models also depends on the input of large amounts of data. Hatcher explains that this level of data processing could lead to what are known as “dark patterns”: deceptive and misleading user interfaces.
Figure 2 AI can potentially enable dark patterns and misleading user interfaces. Source: White Knight Labs
“Improving AI tools’ accuracy and performance can lead to more invasive forms of surveillance,” he explains. “You know those unwanted advertisements that pop up in your browser after you shopped for a new pink unicorn bike for your kid last week? AI will facilitate those transactions and make them smoother and less noticeable. This is moving into ‘dark pattern’ territory—the exact behavior that the FTC regulates.”
Kellner also warns of unintended consequences that AI may have if our organizations and processes become so dependent on the technology that it begins to influence our decision-making. “Both individuals and organizations could become increasingly dependent on AI for handling complex tasks, which could result in diminished skills, expertise, and a passive acceptance of AI-generated recommendations,” she says. “This growing dependence has the potential to cultivate a culture of complacency, where users neglect to scrutinize the validity or ethical implications of AI-driven decisions, thereby diminishing the importance of human intuition, empathy, and moral judgment.”
Solving challenges posed by AI
As for addressing these consequences of AI implementation, Hatcher suggests several measures the FTC could take to enforce responsible use of the technology.
“The FTC needs to be proactive and lean forward on AI’s influence on data privacy by creating stricter data protection regulations for the collection, storage, and usage of personal data when employing AI in cybersecurity solutions,” Hatcher asserts. “The FTC may expect companies to implement advanced data security measures, which could include encryption, multi-factor authentication, secure data sharing protocols, and robust access controls to protect sensitive information.”
Beyond that, the FTC may require developers of AI programs and companies implementing them to be more proactive about their data security. “The FTC should also encourage AI developers to prioritize transparency and explainability in AI algorithms used for cybersecurity purposes,” Hatcher adds. “Finally, the FTC may require companies to conduct third-party audits and assessments of their AI-driven cybersecurity systems to verify compliance with data privacy and security standards. These audits can help identify vulnerabilities and ensure best practices are followed.”
For Kellner, the solution lies more in the synergy that must be found between the capabilities of human employees and their AI tools. “If we just think in terms of replacing humans with AI because it’s easier, cheaper, faster, we may end up shooting ourselves in the foot,” she warns. “My take is that organizations and individuals need to get clear on the essential human elements they want to preserve, then figure out how AI could thoughtfully enhance those, not eliminate them. The goal is complementing each other—having AI amplify our strengths while we retain duties needing a human touch.”
Figure 3 There needs to be a greater synergy between the capabilities of human employees and their AI tools. Source: White Knight Labs
Personal finance offers a clear example of this balance in practice. The finance app Eyeballs Financial uses AI in its financial advisory services, but the app’s founder and CEO, Mitchell Morrison, emphasizes that the AI does not offer financial advice itself. Instead, it supplements a real-life financial advisor.
“If a client asks a question like ‘Should I sell my Disney stock?’, the app’s response will be, ‘Eyeballs does not give financial advice,’ and the message will be forwarded to their advisor,” Morrison explains. “The Eyeballs Financial app does not provide or suggest any form of financial advice. Instead, it offers clients a comprehensive overview of their investment performance and promptly answers questions based on their latest customer statement. The app is voice-activated and available 24/7 in real-time, ensuring clients can access financial information anytime, anywhere.”
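The refuse-and-forward behavior Morrison describes can be sketched as simple intent routing: answer informational questions from statement data, but decline advice-seeking ones and escalate them to the human advisor. The keyword list and function names below are illustrative assumptions, not the app's actual implementation.

```python
# Hypothetical phrases signaling a request for advice rather than information.
ADVICE_KEYWORDS = ("should i buy", "should i sell", "is it a good time", "recommend")

def handle_question(question: str) -> dict:
    """Route a client question: refuse advice-seeking queries and flag
    them for the advisor; answer informational ones from statement data."""
    q = question.lower()
    if any(keyword in q for keyword in ADVICE_KEYWORDS):
        return {
            "reply": "Eyeballs does not give financial advice.",
            "forward_to_advisor": True,
        }
    return {
        "reply": f"Looking that up in your latest statement: {question}",
        "forward_to_advisor": False,
    }

print(handle_question("Should I sell my Disney stock?"))
```

The design choice worth noting is that the human hand-off is the default for anything resembling advice; the AI's scope is deliberately restricted to reporting, which is what keeps the advisor in the loop.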
The Eyeballs use case exemplifies how human involvement is necessary to check the power of AI. Business leaders must remember that AI technologies are still in their infancy; these models are still developing and learning, and they are bound to make mistakes. Humans must remain involved to keep those mistakes from having catastrophic consequences.
Although we cannot discount the tremendous potential of AI models to make work more efficient in virtually every industry, business leaders must implement the technology responsibly. Implemented irresponsibly, AI could cause more harm than the benefits it brings.
The debate about artificial intelligence is best summarized in a question rhetorically asked by Kellner: “Are we trying to empower ourselves or create a god to govern us?” So long as AI is implemented with responsible practices, businesses can stay firmly in the former category, and minimize the risk of falling victim to the latter.
John Stigerwalt is co-founder of White Knight Labs.