The biggest challenge to implementing AI in business: lack of quality data

The growth and adoption of AI is booming in the UK, where the market is now worth over £16.8 billion and is expected to reach £801.6 billion over the next decade. Around 15% of UK businesses already use AI techniques such as data management and analytics, natural language processing, machine learning, and computer vision. Across the Atlantic, AI is having a huge impact on the US economy and is expected to boost GDP by 21% by 2030.

The growth of any new technology is not without challenges. In the case of AI, these challenges include ensuring data privacy, addressing ethical concerns, and dealing with the complexities of integrating with existing IT infrastructure. Data quality is key to overcoming these challenges. To be useful, data used for AI must be of high quality, well-structured, and from trusted sources. These characteristics are the foundation of any AI model and determine its validity and reliability.

A recent ESG whitepaper on IT-related AI model training revealed that 31% of enterprises consider data quality limitations to be a major barrier to AI integration. The discussion here focuses on strategies to address this issue, highlighting the importance of collecting comprehensive, accurately labeled data at scale, and the critical role of human oversight (so-called “human-in-the-loop”) to ensure data integrity.

Whether an AI system relies on large language models like ChatGPT or on other machine learning (ML) techniques, its effectiveness depends heavily on the quality of the data the model is built on. Poor data quality can lead to inaccurate outputs, undermining trust in the reliability and usefulness of AI systems and compromising results that depend on meaningful patterns to deliver relevant, explainable insights.

Unfortunately, not all data is created equal. Traditional ML models that use contextual data from verifiable sources are considered more reliable decision-makers. Generative AI models, in contrast, draw on a broader pool of unverified sources and are therefore often inaccurate.

For example, when applying AI to IT infrastructure, ML models trained on endpoint data can uncover hidden issues through anomaly detection, spotting incipient problems as deviations from baseline patterns. This capability allows IT teams to operate more proactively: predictive analytics, a key benefit of machine learning, combines with endpoint sensors to alert you to IT issues before they have a significant impact on end users and the business as a whole, enabling smarter remediation and reduced costs.
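
As a simplified illustration of the idea, the sketch below (a hypothetical Python example, not any vendor's implementation) flags values in a single endpoint metric that deviate sharply from a rolling baseline:

```python
# Hypothetical sketch: flag endpoint-metric samples (e.g. disk latency in ms)
# that deviate from a rolling baseline by more than a few standard deviations.
# A real platform would model many metrics, seasonality, and context together.
from statistics import mean, stdev

def find_anomalies(samples, window=20, threshold=3.0):
    """Return (index, value) pairs that sit more than `threshold` standard
    deviations away from the baseline built over the previous `window` samples."""
    anomalies = []
    for i in range(window, len(samples)):
        baseline = samples[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(samples[i] - mu) / sigma > threshold:
            anomalies.append((i, samples[i]))
    return anomalies

# Mostly stable latency with one incipient spike that the baseline check surfaces.
latency_ms = [12, 11, 13, 12, 14, 12, 13, 11, 12, 13,
              12, 14, 13, 12, 11, 13, 12, 14, 13, 12, 55]
print(find_anomalies(latency_ms))  # -> [(20, 55)]
```

Flagging a deviation like this early is what allows remediation to happen before end users ever notice a problem.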

Integrating diverse data sources can be a challenge. For IT teams, regularly collecting data from multiple endpoints means a better understanding of technology assets, from hardware and software implementations to network performance. The accuracy of AI-driven recommendations increases with the number of parameters used to fine-tune the model. A useful analogy: think of data points as pixels in an image. The more pixels there are, the clearer the picture; the more data an AI application accumulates, the clearer and more accurate its results.

Platforms like Lakeside SysTrack address this roadblock by assessing each endpoint across the enterprise with over 1,200 sensors, collecting and analyzing a massive volume of endpoint data: 10,000 data points every 15 seconds.

The granularity, breadth, history, and quality of the data collected contrast with the less frequent data points offered by other industry players, and together they provide complete visibility across the entire IT estate. This holistic view improves AI models, enabling IT support technicians and analysts to better identify users who may be experiencing device performance issues and therefore a poor digital experience. Data-driven visibility reveals how to remediate IT issues, where performance is poor across the environment, how your latest IT deployments are impacting users, and much more. Armed with insights from AI models, IT can pivot from reactive to proactive.

Take app performance as an example. Taking a snapshot of an app's CPU and memory usage alone does not provide the comprehensive data needed to effectively train an AI model. To thoroughly evaluate an app, additional metrics must be collected, such as its impact on network performance and GPU usage. Contextual information is equally important, with historical data revealing an app's normal performance parameters, how it functions with other apps, and how it interacts with the system's hardware and drivers. These variables are complex to understand due to their dynamic nature. But using ML and a robust data pool, IT teams can gain a detailed understanding of the complexities of app performance.
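
To make the multi-metric point concrete, here is a minimal, hypothetical sketch of scoring a single app snapshot against that app's own historical baselines across several metrics; the metric names, values, and threshold are illustrative assumptions, not a real product schema:

```python
# Hypothetical sketch: compare an app's current snapshot against its own
# historical baselines across several metrics instead of a single reading.
from statistics import mean, stdev

HISTORY = {  # past samples per metric for one app (illustrative values)
    "cpu_pct":   [18, 22, 20, 19, 23, 21, 20, 22],
    "memory_mb": [410, 425, 400, 415, 430, 420, 410, 418],
    "net_kbps":  [120, 150, 130, 140, 135, 125, 145, 138],
    "gpu_pct":   [5, 6, 4, 5, 7, 6, 5, 6],
}

def score_snapshot(snapshot, history, threshold=3.0):
    """Return metrics whose current value deviates from the historical
    baseline by more than `threshold` standard deviations."""
    outliers = {}
    for metric, values in history.items():
        mu, sigma = mean(values), stdev(values)
        z = abs(snapshot[metric] - mu) / sigma if sigma else 0.0
        if z > threshold:
            outliers[metric] = round(z, 1)
    return outliers

snapshot = {"cpu_pct": 21, "memory_mb": 2050, "net_kbps": 140, "gpu_pct": 6}
print(score_snapshot(snapshot, HISTORY))  # -> {'memory_mb': 172.8}
```

A single CPU reading here looks perfectly healthy; only the comparison with the app's historical memory behavior reveals that something has changed.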

Whatever output the AI produces, human oversight remains important, as models, especially those built on generative AI, can struggle to distinguish good data from bad. Because data is the fuel that powers AI models, ensuring that data is trustworthy and well-governed is essential, but humans still need to provide critical validation and direction.

Human oversight will become even more important in proactive IT management: ML, backed by extensive datasets, will greatly improve anomaly detection and prediction capabilities, but it is human input that will ensure insights based on unusual patterns and trends are actionable and relevant.

For example, natural language processing allows teams to efficiently manage large-scale queries across systems, such as analyzing average Microsoft Outlook usage or identifying employees who aren't using specific software licenses, which can result in unnecessary costs. While this AI integration streamlines operations, it still relies on humans to ensure interventions are appropriately tailored to each situation. Ultimately, AI can become a trusted “co-pilot” for an IT support agent or Level 3 systems engineer.
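
As a simplified illustration of the license example, the sketch below shows the kind of basic query an assistant might run behind a natural-language prompt; the records, field names, and 90-day cutoff are all hypothetical:

```python
# Hypothetical sketch: find employees who hold a license for an app but have
# not launched it within a given window. Data shape and values are illustrative.
from datetime import date, timedelta

records = [  # (employee, application, last_launch)
    ("alice", "visio", date(2024, 1, 5)),
    ("bob",   "visio", date(2024, 5, 28)),
    ("carol", "visio", None),  # licensed but never launched
]

def unused_licenses(records, app, days=90, today=date(2024, 6, 1)):
    """Return employees licensed for `app` with no launch in the last `days` days."""
    cutoff = today - timedelta(days=days)
    return [emp for emp, a, last in records
            if a == app and (last is None or last < cutoff)]

print(unused_licenses(records, "visio"))  # -> ['alice', 'carol']
```

The AI can surface the candidates, but deciding whether to reclaim those licenses is exactly the kind of judgment that stays with a human.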

As organizations evolve toward proactive, predictive, and in the future, fully autonomous IT, prioritizing high-quality data is paramount. The better the data, the better the AI. Clearly, high-quality data and trust in AI go hand in hand. The underlying data not only determines the credibility, explainability, and relevance of AI outputs, it also ensures that users can trust these outputs.

A robust data strategy must go beyond mere collection. To protect AI applications from inaccuracies and biases and to ensure smooth integration, organizations must commit to rigorous data acquisition, disciplined data management practices, and sustained human oversight. These steps are essential to future-proofing business operations and establishing AI as a key asset in the transition to autonomous IT.

Chanel Chambers is vice president of product marketing at Lakeside Software.




