Tech Takes on Generative AI
Since the debut of OpenAI’s ChatGPT, interest in generative AI has soared. Businesses across industries are exploring how generative AI can transform their operations, services, and products. But few industries are better positioned to capitalize on generative AI than the technology sector.
Generative AI presents a unique opportunity for technology companies to hyper-personalize their products, monetize their data, and create frictionless customer experiences, among other innovative use cases. The more advanced generative AI becomes, the more it can enhance the value tech companies bring to their customers.
How Generative AI Transforms Tech
In the tech sector, industry leaders are exploring how generative AI can streamline everything from writing code to creating marketing copy. As generative AI continues to develop, its use cases will expand, offering even more ways to drive value for tech companies. Below, we delve into four use cases that show how generative AI can transform tech companies.
Use Case 1: Enhancing Product Offerings
A company that offers online educational classes is looking to increase customer retention. The company can use generative AI to create customized courses for each customer based on an initial intake form with standard questions about their interests, job title, region, language, and learning preferences. For example, generative AI could automatically translate any course into the customer’s preferred language, or build a custom curriculum around the customer’s learning style and goals. By hyper-personalizing the learning experience, the company gives each customer exactly what they want, which can improve both learning performance and overall satisfaction.
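To make the translation piece concrete, here is a minimal sketch using the OpenAI Python SDK. The model name, prompt wording, and `translate_lesson` helper are illustrative assumptions, not a specific vendor recommendation.

```python
# Minimal course-translation sketch. Assumes the OpenAI Python SDK
# ("pip install openai") and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def translate_lesson(lesson_text: str, target_language: str) -> str:
    """Translate a lesson into the customer's preferred language."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative choice; any capable model works
        messages=[
            {"role": "system",
             "content": f"Translate the following course material into "
                        f"{target_language}. Preserve headings and lists."},
            {"role": "user", "content": lesson_text},
        ],
    )
    return response.choices[0].message.content

print(translate_lesson("Lesson 1: Variables store values.", "Spanish"))
```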
Use Case 2: Expanding Product Functionality
A software company has an extremely comprehensive user guide but receives frequent complaints that the guide is too complicated and difficult to use. As a result, customers are forced to contact the help desk for answers to simple questions, compromising the customer experience and overwhelming the help desk. By adding a generative AI search function to the guide, the software company enables customers to search it using natural language, and the AI can respond with a clear, comprehensive answer. Customers no longer have to comb through an index and piece together partial answers.
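One common way to build such a search function is retrieval-augmented generation: embed the guide’s sections, retrieve the passage closest to the customer’s question, and have the model answer from that passage. Below is a minimal sketch assuming the OpenAI Python SDK and numpy; the guide excerpts, model names, and prompt wording are illustrative.

```python
# Minimal retrieval-augmented search sketch for a user guide.
import numpy as np
from openai import OpenAI

client = OpenAI()

guide_sections = [
    "To reset your password, open Settings > Account > Reset Password.",
    "Exports are available in CSV and JSON from the Reports page.",
    "Two-factor authentication can be enabled under Settings > Security.",
]

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

section_vectors = embed(guide_sections)  # embed once, reuse for every query

def answer(question: str) -> str:
    q = embed([question])[0]
    # Cosine similarity ranks guide sections against the question.
    scores = section_vectors @ q / (
        np.linalg.norm(section_vectors, axis=1) * np.linalg.norm(q)
    )
    context = guide_sections[int(np.argmax(scores))]
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Answer using only the guide excerpt provided."},
            {"role": "user",
             "content": f"Excerpt: {context}\n\nQuestion: {question}"},
        ],
    )
    return resp.choices[0].message.content

print(answer("How do I turn on 2FA?"))
```

In practice the guide would be chunked and indexed in a vector store rather than held in a list, but the retrieve-then-answer flow is the same.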
Use Case 3: Monetizing Data
A SaaS company has access to a wealth of data and is interested in monetizing it to expand its revenue streams. The problem is that the only entities that would be interested in purchasing access to the data set are direct competitors, which presents an obvious conflict of interest. By building a generative AI platform on top of its customers’ data, however, the SaaS company can extract and sell market insights drawn from its customer base. It can also explore selling platform access to anyone who markets to the same customer base, monetizing the data without ever relinquishing it.
Use Case 4: Improving Customer Service
A hardware company with limited customer service resources needs to address customer complaints and questions quickly at all hours. By adding a generative AI chatbot to its website, the company can respond to customers in real time. The chatbot can also generate responses in the customer’s native language, reducing the risk of miscommunication. If the chatbot is sufficiently advanced, customers may not even be able to distinguish it from a real person. And if the chatbot can’t resolve a customer’s issue, it can route the customer through the proper channels to a human agent. Streamlining the issue-handling process ultimately leads to better customer experiences and satisfaction.
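The handoff logic can be quite simple. The sketch below, again assuming the OpenAI Python SDK, uses a sentinel reply to trigger escalation; the `ESCALATE` convention and the `route_to_human` stub are hypothetical, standing in for a real help desk integration.

```python
# Minimal escalation sketch for a support chatbot: if the model
# cannot resolve the issue, it hands off to a human queue.
from openai import OpenAI

client = OpenAI()

SYSTEM = (
    "You are a hardware support assistant. Answer in the customer's "
    "language. If you cannot resolve the issue, reply with exactly "
    "ESCALATE and nothing else."
)

def route_to_human(message: str) -> str:
    # Placeholder: in practice, open a ticket in the help desk system.
    return "A support agent will follow up shortly."

def handle(message: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "system", "content": SYSTEM},
                  {"role": "user", "content": message}],
    )
    reply = resp.choices[0].message.content.strip()
    return route_to_human(message) if reply == "ESCALATE" else reply
```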
Addressing Generative AI Risks
Tech companies are adopting generative AI with great enthusiasm. However, success isn’t guaranteed. Maximizing ROI from generative AI depends on proper planning and execution. Failing to build a solid foundation for generative AI deployment can leave tech companies vulnerable to AI bias, hallucinations, data breaches, and data poisoning. Fortunately, there are steps tech companies can take to mitigate these risks.
1. AI Bias
AI bias occurs when an AI model is trained on a data set, or with processes, that reflect human or systemic prejudice, leading to distorted outputs. For example, image-generation platforms frequently fall victim to gender and racial biases when fulfilling user requests: prompts like “CEO” overwhelmingly yield images of white men, while prompts associated with low-paying jobs like “fast food” yield images of women and people of color.
To reduce the risk of AI bias, tech companies need to ensure they have the right data set for their model. The training set should be diverse enough to represent different demographics accurately while avoiding overrepresentation, a common problem in large data sets. Tech companies should also select the right model, set it up correctly, and favor models that offer algorithmic transparency.
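A simple first check is measuring how labels are distributed across the training set. The sketch below assumes each record carries a demographic label; the field name and the 10% tolerance are illustrative, not industry standards.

```python
# Minimal training-set balance audit: flag labels whose share
# deviates substantially from a uniform split.
from collections import Counter

def audit_balance(records, field="demographic", tolerance=0.10):
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    expected = 1 / len(counts)  # uniform share per label
    flagged = {}
    for label, n in counts.items():
        share = n / total
        if abs(share - expected) > tolerance:
            flagged[label] = round(share, 3)  # over- or underrepresented
    return flagged

records = [{"demographic": "A"}] * 70 + [{"demographic": "B"}] * 30
print(audit_balance(records))  # {'A': 0.7, 'B': 0.3}
```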
Tech companies also need to consider which questions or use cases could raise bias concerns and implement strong governance to avoid them. For example, a company might restrict certain topics where the risk of biased results is significant.
2. Hallucinations
A hallucination occurs when a generative AI program returns a response that is factually inaccurate and/or not supported by its training data. Hallucinations are particularly challenging to detect because the platform presents them as fact. Since the user does not necessarily see the sources used to generate the answer, it can be difficult to distinguish facts from hallucinations. Even when sources are cited, the sources themselves may be fake.
To reduce hallucinations, tech companies need to adopt the right validation procedures for checking the generative AI platform’s outputs. For example, a company may ask an expert in a field related to the request to review the output. The company can also design the platform to include sources, allowing users to confirm the sources are real and actually support the platform’s output.
Next, tech companies need to train their users on how to properly use the platform. A policy on acceptable and unacceptable use is foundational to good generative AI governance. In addition, training on prompt engineering can significantly reduce the risk of hallucinations. Best practices like being as specific as possible and providing the AI with relevant details help users create prompts that produce comprehensive, accurate results. Asking the AI to include sources, and then validating any sources cited, is another best practice.
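That last practice can be partially automated. The sketch below, assuming the OpenAI SDK and the requests library, asks the model to cite URLs and then checks that each cited URL resolves. A reachable URL does not prove a claim, but an unreachable one is an immediate red flag; model name and prompt wording are illustrative.

```python
# Minimal hallucination check: request source URLs, then verify
# that each one actually resolves before trusting the answer.
import re
import requests
from openai import OpenAI

client = OpenAI()

def ask_with_sources(question: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user",
                   "content": question + "\nCite source URLs for every claim."}],
    )
    return resp.choices[0].message.content

def check_sources(answer_text: str) -> dict:
    urls = re.findall(r"https?://\S+", answer_text)
    results = {}
    for url in urls:
        try:
            results[url] = requests.head(url, timeout=5).status_code < 400
        except requests.RequestException:
            results[url] = False  # unreachable: treat as unverified
    return results
```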
3. Data Breaches
A data breach occurs when an unauthorized party, such as a hacker, gains access to private or confidential data. Many generative AI platforms train on data manually input by users, so a breach of the platform could expose all of that user data. Breaches are especially problematic for companies feeding proprietary information into generative AI platforms, and companies should be aware that their employees may have already done so.
It’s critical that tech leaders understand the security implications of the platforms in use. If the organization opts to rely on third-party platforms instead of building its own, it needs to know how the data is being used and how long it’s being kept. Notably, not all platforms leverage user data for training purposes; BDO’s In-House GPT platform, for example, does not. In-house GPT platforms can also work with encrypted data rather than open-source data, which can mitigate the risk of a data breach.
As with any new technology, tech companies also need to monitor the regulatory landscape, both to stay compliant today and to anticipate future compliance requirements so they can start preparing now.
4. Data Poisoning
Data poisoning occurs when a bad actor accesses a training set and “poisons” it by injecting false records or tampering with existing ones. Poisoning can cause the model to return inaccurate results. It can also let bad actors build a backdoor into the model so they can continue to manipulate it when and how they like.
Because generative AI platforms are trained on massive amounts of data, it can be extremely difficult to determine whether or when data poisoning has occurred. Tech companies should be extremely selective about the data they use; open-source data, while very useful for training AI models, is more vulnerable to poisoning attempts. Regular data audits can also help protect against data poisoning.
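One lightweight form of data audit is fingerprinting every record at ingestion and re-verifying the fingerprints before each training run, so silent tampering becomes detectable. A minimal sketch follows; the record format and manifest storage are illustrative assumptions.

```python
# Minimal data-integrity audit against poisoning: hash each record
# at ingestion, then re-verify before every training run.
import hashlib
import json

def fingerprint(record: dict) -> str:
    # Canonical JSON keeps the hash stable across key ordering.
    blob = json.dumps(record, sort_keys=True).encode("utf-8")
    return hashlib.sha256(blob).hexdigest()

def build_manifest(records):
    return {str(i): fingerprint(r) for i, r in enumerate(records)}

def find_tampered(records, manifest):
    return [i for i, r in enumerate(records)
            if manifest.get(str(i)) != fingerprint(r)]

data = [{"text": "ok", "label": 1}, {"text": "fine", "label": 0}]
manifest = build_manifest(data)
data[1]["label"] = 1  # simulated poisoning of an existing record
print(find_tampered(data, manifest))  # [1]
```

This catches tampering with known records; detecting maliciously injected new records still requires provenance controls on the ingestion pipeline itself.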
Moving Forward: Other Key Considerations for Tech Companies
As tech companies move forward with generative AI, there are a few other important considerations to keep top of mind:
- Best Practices. Because interest in generative AI is so strong, many best practices have already been established; look for those relevant to each planned use case. For example, a company planning to use generative AI to write code can explore the best ways to reduce orphan code.
- Employee Adoption. Many employees may initially feel uncomfortable or wary about using generative AI. Encouraging them to use it in their personal lives will ease the eventual transition to professional use. Training employees in prompt engineering is also crucial to increasing their comfort level and helping them generate the best possible results.
- Department Impacts. Consider how generative AI could be used in each department and how that would impact the department’s operations, resourcing needs, and profitability. These impacts should help determine when and how to deploy generative AI within the company. The company may want to run its first pilot in the department that stands to benefit most while facing the least risk.
- App Use. Tech companies may want to explore creating one or more AI-enabled apps. These apps can be created for customer use (for example, an app that complements and enhances the company’s product) or for employee use (for example, workflow management apps). Before creating an AI-enabled app, companies should understand what value it will bring to its intended audience and what resources will be required to maintain it.
- Resource Use. Generative AI isn’t a one-and-done adoption exercise. Once generative AI is adopted, it requires ongoing maintenance, support, reviews, and documentation. The company should have a clear picture of the resources it will need to maintain the AI platform or tool once it’s created and how maintenance will impact regular business operations.
- Emerging Tech. Tech companies need to be prepared not just for what’s happening today, but for what’s ahead. For example, LangChain is an emerging framework that makes it simpler to build applications on top of large language models, and it is likely to present significant opportunities in areas like chatbots and document analysis (see the sketch after this list). As generative AI continues to evolve and new technologies emerge to support and complement it, tech companies should reassess their innovation investments and plan accordingly.
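To make the LangChain point concrete, here is a minimal document-analysis sketch assuming the langchain-core and langchain-openai packages. LangChain’s API evolves quickly, so treat this as the general pattern rather than the exact calls for any given version; the model choice and clause text are illustrative.

```python
# Minimal LangChain sketch: compose a prompt and a chat model into
# a single runnable chain for contract-clause analysis.
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_template(
    "Summarize the key obligations in this contract clause:\n\n{clause}"
)
llm = ChatOpenAI(model="gpt-4o-mini")  # illustrative model choice
chain = prompt | llm  # the pipe operator chains prompt into model

result = chain.invoke({"clause": "The vendor shall provide 99.9% uptime..."})
print(result.content)
```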
Ready to adopt generative AI in your company? Take the first step by learning how to securely enhance your overall AI maturity.