AI tools are seen by many as a boon for research, from work projects to school work to science. For example, instead of spending hours painstakingly examining websites, you can just ask ChatGPT a question, and it will return a seemingly cogent answer. The question, though, is – can you trust those results? Experience shows that the answer is often “no.” AI works well only when humans stay involved, directing and supervising it, and then vetting the results it produces against the real world. But with the fast growth of the generative AI sector and new tools constantly being released, it can be challenging for consumers to understand and embrace the role they must play when working with AI tools.
The AI sector is huge and only getting bigger, with experts stating that it will be worth over a trillion dollars by 2030. It should come as no surprise, then, that nearly every big tech company – from Apple to Amazon to IBM to Microsoft, and many others – is releasing its own AI technology, especially advanced generative AI products.
Given such stakes, it also should come as no surprise that companies are working as fast as possible to release new features that will give them a leg up on the competition. It is, indeed, an arms race, with companies seeking to lock as many users into their ecosystems as possible. Companies hope that features that let users work with AI systems in the easiest way possible – such as getting all the information one needs for a research project by simply asking a generative AI chatbot a question – will win them more customers, who will stay with the product or the brand as new features are added on a regular basis.
But sometimes, in their race to be first, companies release features that may not have been vetted properly, or whose limits are not well understood or defined. While companies have competed in the past for market share on many technologies and applications, the current arms race seems to be leading more companies to release more “half-baked” products than ever – with correspondingly half-baked results. Relying on such output for research purposes – whether business, personal, medical, or academic – could lead to undesired outcomes, including reputational damage, business losses, or even risk to life.
AI mishaps have caused significant losses for several businesses. A company called iTutor was fined $365,000 in 2023 after its AI algorithm rejected dozens of job applicants because of their age. Real estate marketplace Zillow lost hundreds of millions of dollars in 2021 because of incorrect pricing predictions by its AI system. Users who relied on AI for medical advice have also been at risk. ChatGPT, for example, provided inaccurate information to users on the interaction between the blood-pressure-lowering medication verapamil and Paxlovid, Pfizer’s antiviral pill for Covid-19 – and whether a patient could take those drugs at the same time. Those relying on the system’s incorrect advice that there was no interaction between the two could find themselves at risk.
While those incidents made headlines, many other AI flubs don’t – but they can be just as lethal to careers and reputations. For example, a harried marketing manager looking for a shortcut to prepare a report might be tempted to use an AI tool to generate it – and if that tool presents information that is not correct, they may find themselves looking for another job. A student using ChatGPT to write a report – and whose professor is savvy enough to realize the source of that report – may be facing an F, possibly for the semester. And an attorney whose assistant uses AI tools for legal work could find themselves fined or even disbarred if the case they present is skewed because of bad data.
Nearly all of these situations can be prevented if humans direct the AI and have more transparency into the research loop. AI has to be seen as a partnership between human and machine. It is a true collaboration, and that is its outstanding value.
While more powerful search, formatting, and analysis features are welcome, makers of AI products also need to include mechanisms that allow for this cooperation. Systems need to include fact-checking tools that enable users to vet the output of tools like ChatGPT and to see the original sources behind specific data points or pieces of information. This will both produce superior research and restore trust in ourselves: we can submit a report or recommend a policy with confidence, based on facts we trust and understand.
Users also need to recognize what is at stake when relying on AI to produce research. They should weigh the tedium an AI tool saves against the importance of the outcome. For example, humans can probably afford to be less involved when using AI to compare local restaurants. But when doing research that will inform high-value business decisions or the design of aircraft or medical equipment, for instance, users need to be more involved at each stage of the AI-driven research process. The more important the decision, the more important it is that humans are part of it; research for relatively small decisions can probably be entrusted entirely to AI.
AI is getting better all the time – even without human help. It’s possible, if not likely, that AI tools able to vet themselves will emerge, checking their results against the real world in the same way a human would – either making the world a far better place or destroying it. But AI tools may not reach that level as soon as many believe, if ever. This means that the human factor is still going to be essential in any research project. As good as AI tools are at discovering data and organizing information, they cannot be trusted to evaluate context and use that information in the way that we, as human beings, need it to be used. For the foreseeable future, it is important that researchers see AI tools for what they are: tools to help get the job done, rather than something that replaces humans and human brains on the job.