The Convergence 2024 conference explores practical applications of LLMs and related technologies in everyday business operations and the business models they support. This year's lineup includes speakers and experts from top tech companies such as Google, Microsoft, X/Twitter, and PayPal.
The event aims to share the latest breakthroughs and challenges in the AI and ML fields. Sessions are designed to deliver practical insights, covering not only technical aspects but also AI governance and machine learning security.
YOLO analytics in action
Glenn Jocher, Founder and CEO of Ultralytics, will kick off the keynote session with his insights into YOLO analytics, a cutting-edge approach to object detection in action. He will draw on his experiences with the Ultralytics YOLO (You Only Look Once) model, which leads the way in object detection.
This session explores the intricacies of YOLO, showcasing its latest features and seamless integration across platforms for real-time, precise, and efficient detection. Attendees will learn about practical uses, optimization techniques for diverse environments, and future advancements in YOLO technology.
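To give a flavor of the real-time detection the session covers, here is a minimal sketch using the open-source Ultralytics library; the image file and the small yolov8n.pt checkpoint are placeholder choices, not examples from the talk.

```python
# Minimal sketch: running Ultralytics YOLO on a single image.
from ultralytics import YOLO

# Load a small pretrained checkpoint (downloaded automatically on first use)
model = YOLO("yolov8n.pt")

# Run inference; each result holds the detected boxes for one image
results = model("bus.jpg")

for result in results:
    for box in result.boxes:
        cls_name = model.names[int(box.cls)]   # class label, e.g. "bus"
        conf = float(box.conf)                 # detection confidence
        x1, y1, x2, y2 = box.xyxy[0].tolist()  # corner coordinates in pixels
        print(f"{cls_name}: {conf:.2f} at ({x1:.0f}, {y1:.0f}, {x2:.0f}, {y2:.0f})")
```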
“Despite its potential, object detection faces several key challenges that need to be addressed. One of these can be accuracy and robustness, particularly in complex environments with varying lighting conditions, occlusions, and object scales,” says Glenn Jocher, Founder and CEO of Ultralytics. “Improving the ability of object detection models to generalize across diverse scenarios is crucial for real-world applications. The root of the generalization and complexity issues is the task of constructing and maintaining a comprehensive and representative dataset,” he adds.
LLMs and academic learning
The following session will be run by Sanghamitra Deb, AI and ML leader at Chegg, who will present the talk “Developing Conversational AI Agents to Enhance Academic Learning.”
Chegg’s personalized chat feature, powered by LLM agents, allows students to not only ask questions but also receive detailed explanations, resembling a tutor's guidance. Behind the scenes, these agents draw from a decade of data to tailor learning sessions to each student.
The expert will discuss how LLMs face scalability issues, often producing varying results for identical prompts. To ensure reliability, the team uses prompt and model versioning and closely monitors models in production.
“Let’s say the prompt is ‘Explain gravity?’. The content of the explanation might remain the same most of the time, but the language used will have variations. Now if you write a prompt giving a personality such as an empathetic tutor, specifying the level of education, and providing relevant content using RAG, and combine it with specific instructions on how to answer the question, the variation in the output will be much less,” says Sanghamitra Deb, AI and ML Leader at Chegg. “When creating LLM-assisted applications, it's important to explore if smaller ML models for tasks like classification can reduce the strain on generative models, making them more scalable,” she adds.
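The structure Deb describes can be sketched as a simple prompt template; the wording below is a hypothetical illustration of the pattern (persona, education level, retrieved context, explicit instructions), not Chegg's actual prompt.

```python
# Illustrative sketch of a constrained tutoring prompt: a fixed persona,
# an education level, retrieved context, and explicit answer instructions.

def build_prompt(question: str, retrieved_passages: list[str]) -> str:
    context = "\n".join(f"- {p}" for p in retrieved_passages)
    return (
        "You are an empathetic tutor helping a first-year undergraduate.\n"
        "Use only the reference material below to answer.\n\n"
        f"Reference material:\n{context}\n\n"
        f"Question: {question}\n"
        "Answer in at most three short paragraphs, and end with one "
        "check-your-understanding question for the student."
    )

# In practice, the passages would come from a retrieval (RAG) step over course content
prompt = build_prompt("Explain gravity?", ["Gravity is the attraction between masses."])
```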
Decoding LLMs: Challenges in evaluation
Jayeeta Putatunda of Fitch Ratings will talk about how Large Language Models (LLMs) have transformed natural language processing, including conversational AI and content creation. Evaluating their performance, however, remains challenging due to the lack of standardized benchmarks and the difficulty of understanding their decision-making processes and biases.
In her presentation, the expert tackles fundamental questions about effective evaluation metrics for LLMs and their alignment with real-world applications. Given the dynamic growth and evolving architectures in the LLM field, continuous evaluation methodologies are essential to adapt to changing contexts.
Evaluating LLMs
Debasmita Das, a Data Science Manager at Mastercard, will discuss the unique challenges of evaluating Large Language Models (LLMs) because of their generative nature and lack of clear ground truth data. She will cover various aspects of LLM evaluation, including qualitative analysis by humans, quantitative metrics such as perplexity and diversity scores, and domain-specific assessments through downstream tasks. Additionally, the talk will stress the importance of benchmark datasets, reproducibility, and standardized evaluation protocols to enable fair comparisons and advancements in LLM research, including using LLMs to oversee other LLMs.
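As a concrete example of one of the quantitative metrics Das mentions, here is a minimal sketch of computing perplexity with the Hugging Face transformers library; GPT-2 stands in for any causal LLM and the sample text is arbitrary.

```python
# Minimal sketch: perplexity of a causal LM on one text sample.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

text = "Large language models are evaluated with metrics like perplexity."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    # Passing labels makes the model return the mean cross-entropy over tokens
    loss = model(**inputs, labels=inputs["input_ids"]).loss

perplexity = torch.exp(loss).item()  # lower means the model finds the text more predictable
print(f"Perplexity: {perplexity:.2f}")
```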
“Using LLMs to oversee other LLMs is a concept filled with potential and challenges. Conceptually, it's feasible to use one LLM to monitor and improve the outputs of another. For instance, an LLM dedicated to providing historical information might generate content that includes both accurate facts and inaccurate ‘hallucinations’ such as fictitious events or entities. An overseeing LLM, specially trained to recognize historical accuracy, could verify such content against a trusted factual database and flag any discrepancies, such as the erroneous mention of a conflict involving a fictional country,” she comments.
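The overseer pattern Das describes might look something like the sketch below, in which a second model checks a draft answer against trusted reference facts; the model name, prompt wording, and helper function are assumptions for illustration only.

```python
# Hedged sketch of the "LLM overseeing an LLM" pattern: a second model
# reviews a draft answer against trusted facts and flags discrepancies.
from openai import OpenAI

client = OpenAI()

def flag_discrepancies(draft: str, trusted_facts: str) -> str:
    # Any capable model could play the overseer; the name here is a placeholder
    review = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "You are a fact-checker. Compare the draft against the "
                        "trusted facts and list any claims they contradict or "
                        "that the facts do not support. Reply 'OK' if none."},
            {"role": "user",
             "content": f"Trusted facts:\n{trusted_facts}\n\nDraft:\n{draft}"},
        ],
    )
    return review.choices[0].message.content

# e.g. flag_discrepancies(generated_history_answer, facts_from_database)
```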
Elevating ML Workflows: Harnessing the Power of Our MLOps Platform in an Audience Delivery Company
Jivitesh Poojary, Lead ML Engineer at Comcast, shares how today's audience delivery landscape demands seamless integration of machine learning workflows for success. Participants will learn how Comcast's MLOps platform revolutionizes audience targeting, enhancing efficiency, accuracy, and agility at scale.
Real-Time RAG in LLM Applications
Ankit Virmani, Senior Cloud Data Architect/Field CTO at Google, shares his remarks on updating vector databases with streaming pipelines and on the significance of RAG in preventing hallucinations, which greatly affect LLM outputs. The session is ideal for data and ML engineers seeking a deep dive into fine-tuning LLMs using open-source libraries.
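The core idea of keeping retrieval fresh can be sketched in a few lines: new documents are embedded and upserted as they stream in, so answers are grounded in current data. An in-memory numpy store stands in here for a real vector database, and the model choice and sample records are assumptions.

```python
# Minimal sketch: streaming upserts into a vector store plus top-k retrieval.
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")
docs: list[str] = []
vectors = np.empty((0, 384), dtype=np.float32)  # MiniLM embeddings are 384-d

def upsert(new_docs: list[str]) -> None:
    """Called from the streaming pipeline as fresh records arrive."""
    global vectors
    embs = encoder.encode(new_docs, normalize_embeddings=True)
    docs.extend(new_docs)
    vectors = np.vstack([vectors, embs])

def retrieve(query: str, k: int = 3) -> list[str]:
    """Top-k passages used to ground the LLM and reduce hallucinations."""
    q = encoder.encode([query], normalize_embeddings=True)[0]
    scores = vectors @ q  # cosine similarity (embeddings are normalized)
    return [docs[i] for i in np.argsort(scores)[::-1][:k]]

upsert(["Policy X was updated on 2024-05-01 to require two approvals."])
print(retrieve("What does policy X require?"))
```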
Optimizing Sentence Transformers for Entity Resolution at Scale
Melanie Riley, Alec Stashevsky, and Peter Campbell will discuss their experiences with ML at Fetch. Fetch rewards users for uploading receipt images, a process that occurs over 11 million times each day. The ML and engineering teams focus on accurately and quickly extracting, normalizing, and improving receipt information. Entity resolution, a key step, links records across different data sources, especially as paper receipts vary in representing entities like retailer names and product descriptions.
The talk will cover the journey from conception to deployment, explaining how popular sentence transformers and nearest neighbor algorithms were customized for Fetch's receipt data. The team optimized the models for real-time usage and deployed them to serve Fetch's 18 million monthly active users, tracking model experiments with Comet ML.
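The general technique named in the talk can be sketched as follows: embed noisy receipt strings with a sentence transformer and link each one to its nearest canonical entity. The model, entity list, and sample strings below are illustrative assumptions; Fetch's production system differs.

```python
# Hedged sketch: entity resolution via embeddings + nearest neighbor lookup.
from sentence_transformers import SentenceTransformer
from sklearn.neighbors import NearestNeighbors

model = SentenceTransformer("all-MiniLM-L6-v2")

canonical = ["Walmart", "Target", "Costco Wholesale"]
receipt_strings = ["WAL-MART #2034", "TGT STORE 0991", "COSTCO WHSE"]

# Index the canonical entities once; cosine distance suits text embeddings
index = NearestNeighbors(n_neighbors=1, metric="cosine")
index.fit(model.encode(canonical))

# Resolve each raw receipt string to its closest canonical entity
dist, idx = index.kneighbors(model.encode(receipt_strings))
for raw, d, i in zip(receipt_strings, dist[:, 0], idx[:, 0]):
    print(f"{raw!r} -> {canonical[i]} (distance {d:.3f})")
```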
Discussion panels
The conference will include discussion panels where experts will share their remarks and thoughts on the most important issues in the field of Machine Learning and Artificial Intelligence.
This year’s edition will be held virtually on May 8-9, with a dedicated networking reception where both participants and speakers can engage in more direct conversations.
If you are interested in participating in this event, click here to register.