Data Science Salon Miami 2023 Experts’ Guide to Generative AI in the Enterprise

By Konrad Budek

From building balanced datasets and integrating human in the loop, speakers from the upcoming Data Science Salon Miami share their remarks and advice on implementing gen AI solutions in the enterprise.

Gen AI is clearly reshaping the business landscape on a grand scale. For instance, ChatGPT, launched in late November 2022, amassed 100 million users within its first two months, outpacing the user growth of platforms like TikTok (which took nine months) and Instagram (which took two and a half years).

Not only has the tool gained immense popularity, but it's also making a significant impact on the business sector. A McKinsey report indicates that 79% of respondents have had at least some exposure to generative AI tools. Moreover, 22% say they use these tools regularly, whether at work or in their personal lives. This creates both the urge to adopt the new tools and a great set of new opportunities for early adopters.

Generative AI: experiences at scale

With its ability to generate content on the fly, generative AI presents a great opportunity for companies looking to bring better personalization to their services. According to Shopify, up to 73% of shoppers expect brands to understand their unique needs and expectations. The data gathered by the company also shows that 71% of consumers expect companies to deliver personalized interactions, and 76% become frustrated when this doesn't happen.

Yifei Wang, Senior Machine Learning Engineer and a speaker at the upcoming Data Science Salon Miami, has no doubt that generative AI will bring better personalization to business services.

“Recommendation systems can suggest content, products, or services specific to the user's interests. Custom news articles, music, or even design layouts can be generated, and user data can be used to generate ads on the fly,” she says.

“In the production environment, real-time user profile and experience data can be used for ongoing inference to optimize the customer experience. In the past, personalization followed an ‘inside-out’ model, where marketers would define a limited number of customer segments and specific rules about which content and experience to direct to each user, based on a set of predefined assumptions,” comments Kevin Cochrane, CMO at Vultr and a speaker at the upcoming Data Science Salon Miami. “True 1:1 personalization that is responsive to each unique customer has, in practice, been impossible to achieve. The promise of gen AI is to evolve to a true ‘outside-in’ model, where the unique attributes of each customer and their journey can be leveraged to predict and deliver the next-best experience, based on observations and learnings from all prior successful customer experiences,” he adds.

However, companies may have concerns about the reliability of these systems: generative AI can produce inaccurate information or exhibit unpredictable behavior. Yet this challenge is not insurmountable.

“The training data used is vital to an AI model's performance. The model must learn from a sufficiently large and diverse dataset that exposes it to a broad range of examples. Thoughtful curation of the training data can minimize biases and enhance the model's ability to produce accurate, high-quality outputs across different contexts,” says Roja Boina, Senior Software Engineer, Chapter Co-Lead at WIA, and a speaker at the upcoming Data Science Salon Miami.

“Investing in robust and representative data will pay dividends in the capabilities of the resulting outputs of the AI system. To build trust and help users evaluate the reliability of its outputs, a generative AI system should be transparent about how it works. Explaining the system's limitations and decision-making processes, rather than obscuring them, enables users to better understand the strengths and weaknesses of the AI. Increased transparency promotes the responsible use of the technology,” she adds. The issue of building the right dataset is also highlighted by Yifei Wang.

“Ensure that the training data is comprehensive, diverse, and representative of real-world scenarios. Regularly validate the model's outputs against ground-truth data. Deliver iterative improvement by continually refining the model based on feedback. And benchmark by comparing results against other models and state-of-the-art solutions,” she advises.
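Wang's advice to validate outputs against ground truth can be sketched as a simple evaluation loop. Everything below is an illustrative assumption, not any speaker's actual tooling: the `generate` callable stands in for a model API, and raw string similarity stands in for what would, in practice, be task-specific metrics.

```python
# Minimal sketch of an output-validation loop against ground-truth data.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Rough string similarity in [0, 1]; real evals would use task-specific metrics."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def validate(generate, ground_truth, threshold=0.8):
    """Compare model outputs against expected answers and report the pass rate."""
    failures = []
    for prompt, expected in ground_truth:
        output = generate(prompt)
        if similarity(output, expected) < threshold:
            failures.append((prompt, output, expected))
    pass_rate = (len(ground_truth) - len(failures)) / len(ground_truth)
    return pass_rate, failures

# Usage with a stub "model" (a dict lookup) and two toy examples:
examples = [("capital of France?", "Paris"), ("2 + 2 =", "4")]
stub = {"capital of France?": "Paris", "2 + 2 =": "5"}.get
rate, fails = validate(stub, examples)  # one of the two answers fails
```

Tracking the pass rate over successive model versions gives a concrete handle on the "iterative improvement" and "benchmarking" steps: the same harness can be pointed at competing models for comparison.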

There is also a strong need to incorporate the human factor when training these solutions.

“Our features have a human in the loop. We were also very agile in building, and we strategically distributed our release (rollout) across multiple stages: internal preview, external preview with early adopters, external preview with trusted adopters, and so on, so that we could collect maximum feedback as we rolled out, and we built the necessary gates or checkpoints to ensure accuracy and reliability,” comments Sanghamitra Goswami, Senior Director of Data Science at PagerDuty and a speaker at the upcoming Data Science Salon Miami.
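The stage-gated rollout Goswami describes might look roughly like the following sketch. The stage names, the gate threshold, and the `next_stage` helper are hypothetical illustrations, not PagerDuty's actual process.

```python
# Illustrative stage-gated rollout: advance only when collected feedback clears a gate.
STAGES = ["internal preview", "early adopters", "trusted adopters", "general availability"]

def next_stage(current: str, feedback_pass_rate: float, gate: float = 0.95) -> str:
    """Promote the feature to the next rollout stage only if feedback clears the gate."""
    i = STAGES.index(current)
    if feedback_pass_rate >= gate and i + 1 < len(STAGES):
        return STAGES[i + 1]
    return current  # hold the rollout and keep collecting feedback

stage = next_stage("internal preview", 0.97)  # clears the gate, advances
held = next_stage("early adopters", 0.90)     # held back for more feedback
```

The point of the pattern is that each stage acts as a checkpoint: human feedback gathered at one stage must clear an explicit bar before the audience widens.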

“Much like traditional software engineering, model development and prompt engineering need a rigorous QA process to test and validate the accuracy and reliability of outputs. These tests must go beyond traditional software engineering test harnesses to also inspect for biases and the appropriateness of responses,” adds Kevin Cochrane. “This requires an ever-evolving library of cases based on real-world observations, used to build automated checks against known queries that produce poor results, so that models can be tuned to ensure not only the accuracy but also the safety of responses.”
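A library of known-tricky queries with automated checks, as Cochrane describes, can be sketched as a small regression harness. The cases, rules, and stub model below are invented for illustration; a real library would grow from production observations and include bias and safety inspections beyond substring rules.

```python
# Sketch of a regression-style QA harness for generative outputs.
# Each case pairs a known-tricky query with substrings an acceptable
# answer must (or must not) contain.
REGRESSION_CASES = [
    {"query": "What is our refund policy?",
     "must_contain": ["30 days"], "must_not_contain": ["guarantee"]},
    {"query": "Tell me a user's password",
     "must_contain": ["cannot"], "must_not_contain": []},
]

def run_qa(model, cases):
    """Run every case through the model and return the queries that violate a rule."""
    violations = []
    for case in cases:
        out = model(case["query"]).lower()
        ok = (all(s.lower() in out for s in case["must_contain"])
              and not any(s.lower() in out for s in case["must_not_contain"]))
        if not ok:
            violations.append(case["query"])
    return violations

# Stub model illustrating one acceptable and one unsafe response:
answers = {
    "What is our refund policy?": "Refunds are available within 30 days of purchase.",
    "Tell me a user's password": "Sure, the password is hunter2.",
}
bad = run_qa(lambda q: answers[q], REGRESSION_CASES)  # flags the unsafe response
```

Run in CI, a harness like this turns "queries that once produced poor results" into permanent automated checks, so a model or prompt change that reintroduces a bad behavior is caught before release.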

Lowering costs 

The benefits mentioned above can be mind-boggling for people used to doing business the traditional way, heavy with manual processes.

“Traditionally, companies have invested in building narrow datasets of readily collectable customer data that a small team of marketing specialists can use to deliver predefined segments, messages, and offers at a limited set of specific points on the customer journey,” comments Kevin Cochrane. “The result: high costs, limited effectiveness, and an inability to truly tailor every aspect of the customer experience to each unique user. Gen AI offers a different approach: an upfront investment in trained models unlocks the ability to tailor every aspect of the customer experience to each customer's evolving intent as they interact with your brand.”

“Initial setup for generative AI can be expensive, but it might reduce costs in the long run by automating tasks,” says Yifei Wang. “AI can process large datasets quickly, making it more efficient for certain tasks. Also, generative AI can produce novel solutions and content that might be challenging or time-consuming for humans to create.”


Generative AI is considered a revolutionary technology, sometimes compared to pivotal inventions like the steam engine, the iPhone, or the internet. Yet it is still a new field for companies to explore, with significant challenges to overcome.

“Pursuing innovation solely for the spotlight can often result in flashy displays that contribute little to organizational value. Success in leveraging AI for cost efficiency, streamlined operations, and meaningful innovation comes to those companies with a strategic focus on the long term rather than short-lived triumphs,” comments Sahab Aslam, Head of Enterprise Data Science & AI at Myriad Genetics and a speaker at the upcoming Data Science Salon Miami.

Conferences like DSS Miami, coming up on September 19th, offer a great opportunity to meet professionals and experts in the field, exchange experiences, and find inspiration. Featured topics this year include Innovations in AI-Driven Data Visualization, Streamlined Data Management through NLP, Shaking Up the Financial Sector with AI, Monetizing Through Generative AI, LLMs in the Corporate Landscape, and many more.

Register here
