Organizations are making significant investments in leveraging artificial intelligence (AI) to solve business problems, but bringing this technology into production poses significant data quality and availability challenges.
According to Stephen Deasy, chief technology officer (CTO) at Confluent, this is especially true for applications that require real-time data, such as fraud detection.
Speaking to Computer Weekly on the sidelines of the Confluent Data Streaming World Tour in Melbourne, Deasy said organizations are moving beyond experiments and trials this year, taking what they have learned about AI and applying it to high-impact areas to deliver customer value, with the latest streaming data helping them achieve their goals.
Greg Taylor, senior vice president and general manager for Asia Pacific at Confluent, added that there is a movement locally to build on existing systems of record and engagement to create “systems of action” that require newer data.
What’s needed, he said, is the ability to capture data where it is created, process it in real time, and use it to inform actions, such as automating parts of a business. While some Confluent customers in the region have achieved 60-70% automation, he said Australian organizations are generally not that far along.
Taylor said such systems would require business experts as part of the governance structure, for example to check for AI hallucinations, but would enable continuous improvement based on data.
Being able to react quickly to signals increases revenue and reduces fraud, and the ability to run the Confluent platform on-premises, in the cloud, or in a hybrid environment helps customers achieve those goals.
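As a hedged illustration of acting on signals in real time, the following minimal Kafka Streams sketch flags suspicious payments as they occur and routes them to a topic that downstream automation could consume. The topic names, message format and fraud threshold are all illustrative assumptions, not a description of any customer’s actual pipeline.

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

import java.util.Properties;

public class FraudFlagging {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "fraud-flagging"); // hypothetical app ID
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        // "payments" and "fraud-alerts" are hypothetical topic names.
        KStream<String, String> payments = builder.stream("payments");
        payments
            // Toy rule: flag any payment above an arbitrary threshold.
            // A real deployment would score events with a fraud model instead.
            .filter((accountId, payment) -> amountOf(payment) > 10_000.0)
            .to("fraud-alerts");

        new KafkaStreams(builder.build(), props).start();
    }

    // Placeholder parser for an "account,amount" CSV value; a real system
    // would deserialise a structured event (for example, Avro or JSON).
    private static double amountOf(String payment) {
        return Double.parseDouble(payment.split(",")[1]);
    }
}
```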
“AI is a top priority for our customers,” Deasy said, adding that Confluent helps companies feed real-time data directly into AI models.
He added that the company’s adherence to open standards and investments in technology and support resonate with customers, and the company often works with CTOs and system architects to help accelerate the implementation process.
As technology and performance continue to improve for a variety of workloads, Deasy said Confluent can keep up with what people are doing now and what they will do in the future.
Taylor observed that while customers have traditionally relied on vendors for new capabilities, the significant improvements in software engineering brought about by generative AI have potentially enabled them to create their own capabilities, thereby increasing their bargaining power.
“We see it regularly,” Deasy acknowledged, noting that this puts constant pressure on internal software engineering teams to produce more and faster.
The Melbourne event featured presentations from three major Australian companies, focusing on their real-world deployments.
Bendigo Bank
Sam Fursdon, principal AI engineer at Bendigo Bank, explained how the bank used the Confluent platform, including Confluent Flink, to significantly reduce the mainframe load generated by the Open Banking Mandate and its mobile-only subsidiary, Up. Importantly, Confluent Flink integrated seamlessly with Bendigo’s continuous integration and continuous deployment (CI/CD) pattern to create a “well-orchestrated deployment pipeline.”
By combining transaction and balance data with Confluent, the bank reduced mainframe application programming interface (API) calls by 50%. The overnight batch processing backlog has been cleared, with processing now completed by 6am.
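Bendigo’s actual implementation was not shown, but a minimal Flink Table API sketch of the general pattern might look like the following: transactions are continuously aggregated into a balance view served from Kafka rather than via mainframe API calls. The topic names, schemas and connector settings here are assumptions for illustration only.

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class BalanceView {
    public static void main(String[] args) {
        TableEnvironment env = TableEnvironment.create(EnvironmentSettings.inStreamingMode());

        // Hypothetical Kafka-backed source table of account transactions.
        env.executeSql(
            "CREATE TABLE transactions (account_id STRING, amount DECIMAL(12,2), ts TIMESTAMP(3)) " +
            "WITH ('connector' = 'kafka', 'topic' = 'transactions', " +
            "'properties.bootstrap.servers' = 'localhost:9092', " +
            "'format' = 'json', 'scan.startup.mode' = 'earliest-offset')");

        // Hypothetical upsert sink holding the latest balance per account.
        env.executeSql(
            "CREATE TABLE balances (account_id STRING, balance DECIMAL(12,2), " +
            "PRIMARY KEY (account_id) NOT ENFORCED) " +
            "WITH ('connector' = 'upsert-kafka', 'topic' = 'balances', " +
            "'properties.bootstrap.servers' = 'localhost:9092', " +
            "'key.format' = 'json', 'value.format' = 'json')");

        // Maintain a continuously updated balance per account so reads can be
        // served from the stream instead of triggering a mainframe API call.
        env.executeSql(
            "INSERT INTO balances " +
            "SELECT account_id, CAST(SUM(amount) AS DECIMAL(12,2)) AS balance " +
            "FROM transactions GROUP BY account_id");
    }
}
```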
Additionally, the average end-to-end latency from the time a transaction occurs until information is available is only 2.3 seconds during business hours. In practice, this means that ATM users can receive app notifications before cash is dispensed.
Telstra
Telstra uses Confluent to improve its mobile network and customer experience. Telstra’s observability technology product owner Javed Bolim quoted a colleague as saying, “You have to see it to act.”
Events are continuously captured and streamed in real-time for analysis. This helps detect problems early, before customers notice them, and provides richer context by correlating more signals. The system’s extensibility also means that new uses can be implemented without redesigning data flows.
Data streams are filtered, enriched, and stored in a database for multiple uses. These include service guarantees at major events, such as the Boxing Day Test match at the Melbourne Cricket Ground (MCG), and proof of value to ensure customers get what they pay for. Bolim added that new products and features enabled by this capability are in the pipeline.
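A minimal Kafka Streams sketch of this filter-and-enrich pattern is shown below; the topic names, event format and join logic are illustrative assumptions rather than Telstra’s actual pipeline. Raw events are filtered, enriched with site metadata via a stream-table join, and written out for downstream storage and analysis.

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;

import java.util.Properties;

public class NetworkEventEnricher {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "network-event-enricher"); // hypothetical
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        // Hypothetical topics: raw telemetry keyed by site ID, plus site metadata.
        KStream<String, String> events = builder.stream("network-events");
        KTable<String, String> sites = builder.table("site-metadata");

        events
            // Filter: drop routine heartbeats, keep events worth analysing.
            .filter((siteId, event) -> !event.contains("HEARTBEAT"))
            // Enrich: join each event with metadata about the site it came from.
            .join(sites, (event, site) -> event + " | site=" + site)
            // Store: a sink connector on this topic can land the enriched
            // events in a database for dashboards and signal correlation.
            .to("enriched-network-events");

        new KafkaStreams(builder.build(), props).start();
    }
}
```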
Bolim advised other IT professionals to focus on developing the right skills among team members and ensuring business buy-in when faced with difficult choices. He also recommended setting interim goals and developing shared functionality, such as providing self-service access to other internal teams.
Coles
Coles, a supermarket chain, faced challenges arising from the use of dozens of disparate event-based systems across different departments of the organization. “Stuff was everywhere,” said chief engineer Simon Bedford. This sprawl created duplication, additional costs, operational friction, and the need for security redesign.
Coles invested heavily in deploying Confluent as a true enterprise platform, incorporating tools, monitoring, observability, data products, and discoverability. Despite the scale of the implementation, it did not require a large team, relying on just three to five staff members at various stages of the project.
This effort provided an opportunity to start fresh and apply architectural principles such as consistent naming conventions, clear ownership boundaries, and automatic provisioning.
Governance needs to be embedded and automated rather than manual, Bedford advised. He noted that the platform is “treated like an internal software-as-a-service [SaaS]”, with developers playing the role of customers. Developers typically avoid anything that seems overly restrictive, so it is important to get the user experience right, and education has become a key part of the process. Ultimately, Coles achieved strong internal adoption by providing a strong developer experience that included self-service provisioning with GitOps and CI/CD integration.
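As a sketch of what automated, convention-enforcing provisioning can look like, the following uses Kafka’s AdminClient to validate a topic name against a hypothetical naming convention before creating the topic. In a GitOps flow, code like this would run from the CI/CD pipeline once a topic definition is merged. The convention, topic name and replication settings are assumptions, not Coles’ actual configuration.

```java
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

import java.util.List;
import java.util.Properties;
import java.util.regex.Pattern;

public class TopicProvisioner {
    // Hypothetical convention: <domain>.<team>.<dataset>.v<version>
    private static final Pattern NAMING =
        Pattern.compile("^[a-z]+\\.[a-z-]+\\.[a-z-]+\\.v\\d+$");

    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        String topic = "retail.supply-chain.stock-levels.v1"; // illustrative name only
        if (!NAMING.matcher(topic).matches()) {
            throw new IllegalArgumentException("Topic name violates the convention: " + topic);
        }

        try (AdminClient admin = AdminClient.create(props)) {
            // In a GitOps flow this would run from the CI/CD pipeline after a
            // topic definition is merged, not from a developer's laptop.
            admin.createTopics(List.of(new NewTopic(topic, 6, (short) 3))).all().get();
        }
    }
}
```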
“We have gone to great lengths to ensure that [observability and monitoring] is enterprise grade,” he added, noting that cost attribution is now done through telemetry.
As a result, “chaos and complexity” has been replaced with “structure and efficiency”, increasing data discoverability and enabling high levels of reuse across the business.
“This platform has been easier and more reliable than other platforms,” Bedford said, adding that it is trusted and widely used, delivering business outcomes such as faster time to market, lower integration costs, and increased customer responsiveness. “We have high-quality data [and] we want to explore how it can be used for AI.”
