Generative AI 101
Welcome to Generative AI 101, your go-to podcast for learning the basics of generative artificial intelligence in easy-to-understand, bite-sized episodes. Join host Emily Laird, AI Integration Technologist and AI lecturer, to explore key concepts, applications, and ethical considerations, making AI accessible for everyone.
Episodes
Natural Language Processing (NLP) Concepts
Monday Jul 01, 2024
In this episode of Generative AI 101, we break down the fundamental concepts of Natural Language Processing (NLP). Imagine trying to read a book that's one long, unbroken string of text—impossible, right? That’s where tokenization comes in, breaking text into manageable chunks. We’ll also cover stemming and lemmatization, techniques for reducing words to their root forms, and explain the importance of stop words—the linguistic background noise. Finally, we’ll explore Named Entity Recognition (NER), which identifies key names and places in text. These basics form the foundation of NLP, making our interactions with technology smoother and more intuitive.
Connect with Emily Laird on LinkedIn
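Want to try these concepts hands-on? Here's a minimal sketch using NLTK and spaCy (an illustrative choice of libraries, not tools named in the episode); it assumes both packages and their language data are installed:

```python
# A minimal sketch of the NLP basics from this episode, using NLTK and spaCy.
# Assumes both packages are installed, plus spaCy's "en_core_web_sm" model.
import nltk
from nltk.tokenize import word_tokenize
from nltk.stem import PorterStemmer, WordNetLemmatizer
from nltk.corpus import stopwords

nltk.download("punkt", quiet=True)      # tokenizer data
nltk.download("wordnet", quiet=True)    # lemmatizer data
nltk.download("stopwords", quiet=True)  # stop-word lists

text = "The runners were running quickly through New York City."

# Tokenization: break the unbroken string into manageable chunks.
tokens = word_tokenize(text)

# Stemming and lemmatization: reduce words to their root forms.
stemmer, lemmatizer = PorterStemmer(), WordNetLemmatizer()
print([stemmer.stem(t) for t in tokens])                   # crude chopping: "running" -> "run"
print([lemmatizer.lemmatize(t, pos="v") for t in tokens])  # dictionary-aware roots

# Stop words: filter out the linguistic background noise.
stops = set(stopwords.words("english"))
print([t for t in tokens if t.lower() not in stops])

# Named Entity Recognition: spot the key names and places.
import spacy
doc = spacy.load("en_core_web_sm")(text)
print([(ent.text, ent.label_) for ent in doc.ents])  # e.g. ("New York City", "GPE")
```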
The History of Natural Language Processing (NLP)
Friday Jun 28, 2024
In this episode of Generative AI 101, we journey through the captivating history of Natural Language Processing (NLP), from Alan Turing's pioneering question "Can machines think?" to the game-changing advancements of modern AI. Discover how NLP evolved from early rule-based systems and statistical methods to the revolutionary introduction of machine learning, deep learning, and OpenAI's GPT-3. Tune in to understand how these milestones have transformed machines' ability to understand and generate human language, making our tech experiences smoother and more intuitive.
Connect with Emily Laird on LinkedIn
What is Natural Language Processing (NLP)?
Thursday Jun 27, 2024
Let's explore Natural Language Processing (NLP). Picture this: you’re chatting with your phone, asking it to find the nearest pizza joint, and it not only understands you but also provides a list of places with mouth-watering photos. That’s NLP in action. We'll explain how NLP allows machines to interpret and respond to human language naturally, like teaching a robot to be a linguist. Discover its key applications, from virtual assistants and machine translation to sentiment analysis and healthcare. Tune in to learn why NLP is the magic making our interactions with technology smoother and more intuitive.
Connect with Emily Laird on LinkedIn
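To make one of those applications concrete, here's a tiny sentiment-analysis sketch using the Hugging Face `transformers` pipeline (an illustrative library choice, not one covered in the episode):

```python
# One NLP application from the episode, in a few lines: sentiment analysis
# with the Hugging Face `transformers` pipeline (assumed installed; the
# default English model is downloaded on first use).
from transformers import pipeline

classifier = pipeline("sentiment-analysis")
print(classifier("The pizza from that place down the street is incredible!"))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```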
Transformers Mini Series: How do Transformers Process Text?
Wednesday Jun 26, 2024
In this episode of Generative AI 101, we explore how Transformers break down text into tokens. Imagine turning a big, colorful pile of Lego blocks into individual pieces to build something cool—that's what tokenization does for AI models. Emily explains what tokens are, how they work, and why they're the magic behind GenAI's impressive outputs. Learn how Transformers assign numerical values to tokens and process them in parallel, allowing them to understand context, detect patterns, and generate coherent text. Tune in to discover why tokenization is important for tasks like language translation and text summarization.
Connect with Emily Laird on LinkedIn
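If you'd like to watch a real Transformer tokenizer at work, here's a small sketch using Hugging Face `transformers` (an illustrative choice; "bert-base-uncased" is just a convenient example checkpoint):

```python
# Tokenization in practice: subword tokens and the numerical IDs a
# Transformer actually processes. Assumes `transformers` is installed.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
text = "Transformers turn text into tokens."

print(tokenizer.tokenize(text))  # the individual Lego pieces (subword tokens)
print(tokenizer.encode(text))    # the numerical values the model processes in parallel
```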
Transformers Mini Series: How do Transformers work?
Tuesday Jun 25, 2024
In part two of our Transformer mini-series, we peel back the layers to uncover the mechanics that make Transformers the rock stars of the AI world. Think of this episode as your backstage pass to understanding how these models operate. We’ll break down the self-attention mechanism, comparing it to having superhuman hearing at a party, and explore the power of multi-head attention, likened to having multiple sets of ears tuned to different conversations. We also delve into the rigorous training process of Transformers, from the use of GPUs and TPUs to optimization strategies.
Connect with Emily Laird on LinkedIn
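For the curious, here's a bare-bones NumPy sketch of the self-attention math the episode describes (illustrative only; real Transformers add learned projections, masking, and multiple heads):

```python
# Scaled dot-product self-attention, stripped to its core.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # how loudly each token "hears" the others
    weights = softmax(scores)        # the party-conversation attention weights
    return weights @ V               # blend each token's value by those weights

# Three tokens, each a 4-dimensional vector (random stand-ins for embeddings).
rng = np.random.default_rng(0)
x = rng.normal(size=(3, 4))
print(self_attention(x, x, x).shape)  # (3, 4): one contextualized vector per token
```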
What is a Transformer?
Monday Jun 24, 2024
In this episode, we step into the fascinating world of Transformers. Imagine it's the early days of AI, with RNNs and LSTMs doing the heavy lifting, but struggling with long-range dependencies like forgetful grandparents. Enter the Transformer model—a revolutionary architecture introduced in 2017 by Google’s "Attention is All You Need" paper. Transformers handle long-range dependencies and process data in parallel, making them incredibly efficient. We'll break down their key components like self-attention, positional encoding, and multi-head attention, showing how they transformed the AI landscape. Tune in to discover why Transformers are the shiny new sports car of AI models.
Connect with Emily Laird on LinkedIn
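As a companion to the episode, here's a small NumPy sketch of the sinusoidal positional encoding from the "Attention is All You Need" paper, which is how a parallel model learns where each token sits in the sequence:

```python
# Sinusoidal positional encoding: position information a Transformer can
# add to its token embeddings (a sketch of the formula from the paper).
import numpy as np

def positional_encoding(seq_len, d_model):
    pos = np.arange(seq_len)[:, None]  # token positions
    i = np.arange(d_model)[None, :]    # embedding dimensions
    angles = pos / np.power(10000, (2 * (i // 2)) / d_model)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles[:, 0::2])  # even dimensions use sine
    pe[:, 1::2] = np.cos(angles[:, 1::2])  # odd dimensions use cosine
    return pe

print(positional_encoding(seq_len=10, d_model=16).shape)  # (10, 16)
```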
Deep Learning Mini Series: What are Recurrent Neural Networks (RNNs)?
Friday Jun 21, 2024
In this episode of our deep learning mini-series, we explore Recurrent Neural Networks (RNNs). Imagine reading a mystery novel, keeping track of all the clues and characters—RNNs are like your super-intelligent reading buddy, remembering past events to make sense of the present. Perfect for processing sequences of data like text and speech, RNNs are valuable where context matters. We’ll explore their key components, such as recurrent layers and hidden states, and see real-world applications from language translation to financial forecasting.
Connect with Emily Laird on LinkedIn
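Here's a minimal NumPy sketch of the hidden-state loop that gives RNNs their memory (illustrative only, not a production implementation):

```python
# A vanilla RNN cell: a hidden state carried forward step by step,
# like the reading buddy's memory of earlier clues.
import numpy as np

rng = np.random.default_rng(0)
input_size, hidden_size, seq_len = 5, 8, 4

W_xh = rng.normal(scale=0.1, size=(input_size, hidden_size))   # input -> hidden
W_hh = rng.normal(scale=0.1, size=(hidden_size, hidden_size))  # hidden -> hidden
b_h = np.zeros(hidden_size)

h = np.zeros(hidden_size)                           # memory so far: empty
for x_t in rng.normal(size=(seq_len, input_size)):  # one step per word/clue
    h = np.tanh(x_t @ W_xh + h @ W_hh + b_h)        # update memory with new input

print(h.shape)  # (8,): final hidden state summarizing the whole sequence
```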
Deep Learning Mini Series: What are Convolutional Neural Networks (CNNs)?
Thursday Jun 20, 2024
In our latest deep learning mini-series episode, we unravel the mysteries of Convolutional Neural Networks (CNNs). Imagine you're at an art gallery with a robot that can analyze every brushstroke and tell you what the artist had for breakfast. That’s CNNs for you—the eagle-eyed inspectors of the neural network family, adept at interpreting visual data. We’ll examine their key components, such as convolutional, pooling, and fully connected layers, and explore real-world applications from facial recognition to self-driving cars. Tune in to understand how CNNs transform AI by making sense of the visual world with stunning accuracy.
Connect with Emily Laird on LinkedIn
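To ground those layer types, here's a toy CNN wired up in PyTorch (an illustrative framework choice), sized for 28x28 grayscale images:

```python
# A toy CNN with the three layer types from the episode:
# convolutional, pooling, and fully connected.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),  # convolution: scan for local patterns
    nn.ReLU(),
    nn.MaxPool2d(2),                            # pooling: shrink, keep the strongest signals
    nn.Flatten(),
    nn.Linear(8 * 14 * 14, 10),                 # fully connected: map features to 10 classes
)

x = torch.randn(1, 1, 28, 28)  # a fake single-channel image
print(model(x).shape)          # torch.Size([1, 10])
```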
Deep Learning Mini Series: What is a Neural Network?
Wednesday Jun 19, 2024
In this episode of Generative AI 101, we kick off our deep learning mini-series with Neural Networks 101. Think of neural networks as the brain behind the operation, minus the forgetfulness. We’ll break down the basics, from neurons and layers to weights and biases, and explain how these algorithms mimic the human brain. We’ll also dive into real-world applications like voice assistants, self-driving cars, and spam filters. Join us for an entertaining and insightful journey into the foundational elements of neural networks.
Connect with Emily Laird on LinkedIn
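And here's the bare anatomy of a neural network (neurons, weights, biases, and an activation) as a single NumPy forward pass, sketch only:

```python
# One forward pass through a tiny two-layer network.
import numpy as np

rng = np.random.default_rng(0)

def layer(x, W, b):
    return np.maximum(0, x @ W + b)  # each neuron: weighted sum + bias, then ReLU

x = rng.normal(size=3)                         # 3 input features
W1, b1 = rng.normal(size=(3, 4)), np.zeros(4)  # hidden layer: 4 neurons
W2, b2 = rng.normal(size=(4, 2)), np.zeros(2)  # output layer: 2 neurons

hidden = layer(x, W1, b1)
output = hidden @ W2 + b2  # raw output scores
print(output)
```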
Machine Learning Mini Series - What is Reinforcement Learning?
Tuesday Jun 18, 2024
In this episode of our machine learning mini-series, we explore the world of Reinforcement Learning (RL). Think of RL as the rebellious teenager of the machine learning family, eager to learn through trial and error. We’ll break down the basics: from agents and environments to actions, rewards, and policies. Using engaging analogies like training a dog or a game show contestant, we’ll explore real-world applications, including self-driving cars, video games, robotics, and marketing. Plus, we'll discuss the challenges of balancing exploration with exploitation and the hefty data requirements that make RL both fascinating and formidable.
Connect with Emily Laird on LinkedIn
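And here's exploration vs. exploitation in miniature: an epsilon-greedy agent learning a made-up three-armed bandit (the reward probabilities are hypothetical, and this is a sketch, not a full RL setup):

```python
# Epsilon-greedy trial and error on a 3-armed bandit.
import random

true_reward_prob = [0.2, 0.5, 0.8]  # hidden payout rates the agent must discover
q = [0.0, 0.0, 0.0]                 # the agent's running value estimates
counts = [0, 0, 0]
epsilon = 0.1                       # 10% of the time: explore at random

for step in range(1000):
    if random.random() < epsilon:
        action = random.randrange(3)  # explore: try something new
    else:
        action = q.index(max(q))      # exploit: pick the best action so far
    reward = 1 if random.random() < true_reward_prob[action] else 0
    counts[action] += 1
    q[action] += (reward - q[action]) / counts[action]  # incremental average

print(q)  # estimates should approach [0.2, 0.5, 0.8]
```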
Lecturer + Speaker
Transform your business with Emily Laird's captivating presentations on Generative AI. An AI expert and dynamic speaker, Emily breaks down complex concepts in a way that is both clear and entertaining. Perfect for businesses and organizations eager to discover AI's potential.