
UNDERSTANDING ITS LIMITATIONS

1. Gen-AI Hallucinates

Gen-AI tools sometimes present false information with complete confidence, a phenomenon known as hallucination. Some hallucinations are small and relatively benign, such as mixing up proper nouns in an AI-generated story. Others can prove more costly, such as citing fictitious cases in legal research or fudging sales data. Most gen-AI vendors downplay the existence of hallucinations, so many people mistakenly assume that anything an AI produces is automatically right. In truth, the rate of hallucinations is estimated to be anywhere from 2% to 5%.

Similarly, many organizations believe that gen-AI trained on internal materials is hallucination-proof. This is false. If your organization links out to YouTube videos or informational websites, your "bubble" isn't completely private. And even if you avoid exterior links, current technology will still occasionally produce errors. There is no way to prevent hallucinations entirely, so your team needs to understand what they are. Any inaccurate output should be labeled as erroneous so the AI doesn't assume it was right, and stakeholders should ensure consistent human fact-checking to catch hallucinations before they cause a problem for your organization.

2. Prompts Require Care

All gen-AI tools require human input through prompts. Writing prompts seems simple, but ineffective prompts are a reliable way to get inaccurate results. For instance, say you want to use AI to help your software company produce customer training materials. You'll need a learning management system (LMS) to deliver your content, so you ask, "What should I look for in an LMS?" From your perspective, you've asked for exactly what you want. However, the gen-AI tool doesn't know you're looking for customer training tools. Many LMS vendors stress their employee training capabilities, so your response may be geared toward employee training. Alternatively, you might get an answer geared toward a traditional educational setting with teachers and students. Either way, the tool failed.

Providing too much information is another common problem. Most gen-AI tools struggle with long queries, which increases the risk of hallucinations or error messages. They may also assume you want them to evaluate what you said rather than complete the task, wasting time. Best practices vary from tool to tool, so you'll need to train team members to phrase queries so each algorithm produces the right type of response. One simple habit is sketched below.
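One low-cost habit is to wrap every bare question in a template that states the audience and the goal, so the context the model can't guess travels with the prompt. Here is a minimal sketch in Python; the field names and wording are illustrative assumptions, not a vendor-prescribed format:

```python
# Minimal sketch: a reusable prompt template that carries the context
# a bare question leaves out. Field names and phrasing are hypothetical.

def build_prompt(question: str, audience: str, goal: str) -> str:
    """Wrap a bare question with the audience and goal the model can't guess."""
    return (
        f"You are advising a software company whose goal is: {goal}.\n"
        f"The intended audience is: {audience}.\n"
        f"Question: {question}\n"
        "Answer only for that audience and goal."
    )

# The LMS question from the example above, now with its context spelled out.
print(build_prompt(
    question="What should I look for in an LMS?",
    audience="external customers learning our product",
    goal="producing customer training materials",
))
```

A template like this won't rescue every vague prompt, but it keeps answers from drifting toward employee training or classroom settings by default.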
3. Gen-AI Isn't Free

ChatGPT has a free version, so many people assume gen-AI is always free. It isn't. ChatGPT processes an estimated $700,000 worth of queries daily, a cost OpenAI absorbs as a marketing expense to remain virtually synonymous with generative AI technology. Likewise, paid gen-AI tools like GPT-4.5 appear free to the end user. Your employees might think their prompts are answered at no cost, but your organization will get a substantial bill.

Gen-AI is monetized through tokens. Each token generally costs less than one cent, but queries comprise far more tokens than you might expect. For example, the word "hamburger" is three tokens: ham, bur, and ger. Choosing a language other than English dramatically increases token usage, too: "Como estas" ("how are you" in Spanish) is only 10 characters, yet costs five tokens for a gen-AI tool to process. The length of the inputs, outputs, and request parameters also affects how many tokens are needed.

Your workforce should understand the cost of every query so time and money aren't wasted. Phrasing queries correctly on the first try is a great first step, and stakeholders and workers should also understand the capabilities of their gen-AI tools so they don't waste tokens on tasks the tools cannot complete. The sketch below shows how to count a prompt's tokens before paying for them.
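If you want to see how a prompt splits into tokens before it hits a paid API, OpenAI's open-source tiktoken library can count them locally. A minimal sketch, assuming Python; the per-token price is a placeholder, and exact splits vary by model:

```python
# Minimal sketch: count a prompt's tokens with tiktoken before sending it.
# PRICE_PER_1K_TOKENS is a hypothetical rate, not a published price.
import tiktoken

PRICE_PER_1K_TOKENS = 0.01  # dollars, illustrative only

def estimate_cost(text: str, model: str = "gpt-4") -> float:
    """Tokenize `text` the way `model` would and estimate its input cost."""
    encoding = tiktoken.encoding_for_model(model)
    tokens = encoding.encode(text)
    pieces = [encoding.decode([t]) for t in tokens]  # the actual token strings
    print(f"{text!r} -> {len(tokens)} tokens: {pieces}")
    return len(tokens) / 1000 * PRICE_PER_1K_TOKENS

# Short words can still span several tokens, and non-English text
# often costs more tokens per character.
estimate_cost("hamburger")
estimate_cost("Como estas")
```

Running a counter like this on representative prompts gives your team a concrete feel for which phrasings burn tokens unnecessarily.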

Focusing on Gen-AI Shortcomings Isn't Pessimistic

Generative AI gets tons of media hype as a cutting-edge technology, so focusing on its shortcomings may seem shortsighted. However, your organization will get more out of these tools if you and your team understand how they work and what they can do. A working knowledge of gen-AI's limitations can give you a competitive advantage.