If AI wasn’t already the belle of the tech ball, the advanced generative AI tools surfacing left and right have certainly secured its title. Organizations are understandably in a rush to get in on the action — not just for AI’s potential utility to their business, but also because, more and more, demonstrating use of AI feels like a marketing imperative for any business that wants to appear “cutting edge,” or even simply “with the times.”
Sometimes, rapid technology integrations can be a boon to the business. But other times, this kind of urgency can lead to poor, short-sighted decisions about implementation. If the technology doesn’t actually solve a real problem (and sometimes even when it does), people won’t want to change their processes to use it. All this to say: a bitter first taste of AI within an organization can harm its chances of success the next time around, even if the strategy has improved.
At Grand Studio, we’ve had the privilege of working alongside major organizations taking their first high-stakes steps into AI. We know the positive impact the right kind of AI strategy can have on a business. But we’ve also seen the ways in which pressure to adopt AI can lead to rushed decision-making that leaves organizations worse off.
Our top-level advice to businesses looking to implement AI: don’t lose sight of human-centered design principles. AI may be among the most sophisticated tools we use, but it is still just that: a tool. As such, it must always operate in service of the humans who use it.
A human lens on artificial intelligence
When implementing AI, it is tempting to start with the technology itself: what can it do exceptionally well? Where might its strengths be of service to your organization? These can be helpful brainstorming questions, but no AI strategy is complete until it closely analyzes how those strengths would operate in conjunction with the humans you rely on, whether they are your employees or your customers.
CASE IN POINT
In our work supporting a major financial organization, we designed an AI-based tool for bond traders. Originally, the organization imagined using AI to tag particular bonds with certain characteristics, making them easier for traders to pull up. It seemed like a great use of the technology, and a service that would speed up and streamline traders’ workflows. But once we got on the ground and started talking to traders, it turned out that pulling up bonds based on tags was not actually their biggest problem. AI may be a golden hammer, but the proposed project wasn’t a nail; it only looked like one from far away.
As we got more clarity on the traders’ true needs, we realized that what they actually needed was background information to help them make pricing decisions on bonds. And they wanted that information displayed in a particular way: not just a suggested price, but the data that led to it. That way, they could bring their own expertise to bear on the AI’s output.
If we had designed a product based on the original assumptions, it likely would have flopped. To be useful, the AI needed to be particularly configured to the humans at the center of the problem.
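To make that concrete, here is a minimal sketch of the difference this makes in an interface contract. It is purely illustrative: the field names, the placeholder pricing logic, and the data are all hypothetical, not the actual tool we built. The point is the shape of the output, which carries evidence a trader can check rather than a bare number.

```python
# Hypothetical sketch: a suggestion that travels with its supporting evidence.
from dataclasses import dataclass, field


@dataclass
class ComparableTrade:
    bond_id: str
    price: float
    traded_at: str  # ISO 8601 timestamp


@dataclass
class PriceSuggestion:
    bond_id: str
    suggested_price: float  # the model's recommendation
    comparable_trades: list[ComparableTrade] = field(default_factory=list)
    rationale: str = ""  # plain-language summary of the drivers


def suggest_price(bond_id: str) -> PriceSuggestion:
    """Return not just a number, but the evidence a trader can audit."""
    comps = [
        ComparableTrade("XYZ-2031", 98.4, "2024-05-01T14:02:00Z"),
        ComparableTrade("XYZ-2032", 97.9, "2024-05-01T15:30:00Z"),
    ]
    # Placeholder "model": average of comparable trades.
    price = sum(c.price for c in comps) / len(comps)
    return PriceSuggestion(
        bond_id=bond_id,
        suggested_price=round(price, 2),
        comparable_trades=comps,
        rationale="Priced off two recent trades in bonds with similar duration.",
    )
```

A trader looking at this output can agree with the suggestion, or spot that one of the comparable trades is stale and adjust. A bare number offers no such opening for their expertise.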
The linkage points between human and AI are crucial
We all know that bad blood among employees can spell doom for an organization. Mistrust and negative energy are surefire ways to sink a ship. In many ways, integrating AI can feel a lot like hiring a slew of new employees. If your existing employees aren’t appropriately prepared for what to expect and how to work with the new crowd, it can ruin even the best-laid plans.
Once you’ve identified where AI fits into your organization, we recommend paying extremely close attention to the linkage points between human and AI. Where must these parties cooperate? What trust needs to be built? What suspicion needs to be mitigated? How can each benefit the other in the best way possible?
CASE IN POINT
Recently, we worked with a financial services technology provider to develop AI that could spot fraud and inaccuracies in trading. We conducted in-depth research into the surveillance teams who would be using the software, to understand both their role and their expectations for how they’d use such a tool. This allowed us to thoughtfully build a visual interface on top of the AI that met the surveillance teams’ needs as fully as possible, including helping them with task management.
Taking the time to understand the precise nature of this potential human-AI collaboration helped us use resources wisely and prevent the mistrust and resistance that can cause even the best tools to fail.
AI integrations require trust and understanding
Your AI also can’t be a “black box.” While not everyone at your organization needs to be an expert on its inner workings, simply dropping an unfamiliar tool into a work environment and expecting people to trust whatever it spits out is very likely misguided. This is especially true when AI is meant to help experts do their jobs better. These roles are defined by the deep training that goes into them; how are experts supposed to give an open-armed welcome to a new “employee” whose training they can’t see or understand?
For example, a doctor trained in reviewing mammograms may well benefit from AI software that can review 500 scans and whittle them down to the 20 that need human assessment. But you can imagine a physician’s resistance to simply accepting those 20 images without understanding how and why the software weeded out the other 480. Physicians rely on their expertise to save lives, and they need to trust that the tools helping them are backed by similar training and values.
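As a rough illustration (a hypothetical sketch, not any real product), a triage tool that earns that trust might record a score and a reason for every scan, whether it was escalated or cleared, so the physician can audit where the line was drawn rather than simply accept it.

```python
# Hypothetical sketch of auditable triage: every scan keeps its score and
# the reason it was escalated or cleared, so the cut-off can be inspected.
from dataclasses import dataclass


@dataclass
class TriageResult:
    scan_id: str
    risk_score: float  # 0.0 (clearly normal) to 1.0 (clearly suspicious)
    escalated: bool
    reason: str


def triage(scores: dict[str, float], threshold: float = 0.3) -> list[TriageResult]:
    """Split scans at a risk threshold, keeping the reasoning for both groups."""
    results = []
    for scan_id, score in scores.items():
        escalated = score >= threshold
        reason = (
            f"risk score {score:.2f} is at or above threshold {threshold:.2f}"
            if escalated
            else f"risk score {score:.2f} is below threshold {threshold:.2f}"
        )
        results.append(TriageResult(scan_id, score, escalated, reason))
    return results


if __name__ == "__main__":
    # Example: five scans in, two escalated for human review, all five auditable.
    sample_scores = {
        "scan-001": 0.82,
        "scan-002": 0.05,
        "scan-003": 0.41,
        "scan-004": 0.12,
        "scan-005": 0.09,
    }
    for result in triage(sample_scores):
        print(result)
```

The specific threshold and scoring are placeholders; what matters is that the cleared scans don’t simply vanish. The physician can see why each one was set aside, which is the difference between a tool they supervise and a black box they are asked to obey.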
AI has the power to make big change. But if we don’t center humans in our implementations, the change we make may not be the good kind.
Contemplating your early steps into AI? We’d love to work with you to help make your leap into the future a smart one.