Focusing on pressing topics in artificial intelligence and related areas, with easy-to-digest content, guest and expert interviews, and insightful commentary that cuts through the hype and noise to identify what is really happening in the world of AI.
Experimenting, testing, and refining your prompts is essential. The journey to crafting the perfect prompt often involves trying various strategies to discover what works best for your specific needs. A best practice is to constantly experiment, practice, and try new things using an approach called “hack and track,” where you use a spreadsheet or other method to record which prompts work well as you experiment.
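The “hack and track” idea can be as simple as an append-only log. A minimal sketch in Python, where the file name, rating scale, and example prompt are all invented for illustration:

```python
import csv
from datetime import date

# Hypothetical log file; a shared spreadsheet works just as well.
LOG_FILE = "prompt_log.csv"

def track_prompt(prompt, model, rating, notes=""):
    """Append one experiment row: date, model, a 1-5 quality rating,
    free-form notes, and the exact prompt text that was tried."""
    with open(LOG_FILE, "a", newline="") as f:
        csv.writer(f).writerow(
            [date.today().isoformat(), model, rating, notes, prompt]
        )

# Example entry: record that adding an audience made the output sharper.
track_prompt(
    "Summarize this article in 3 bullet points for executives.",
    "gpt-4", 5, "Specifying the audience improved focus",
)
```

Over time, sorting the log by rating surfaces the prompt patterns worth reusing.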
Plugins for Large Language Models (LLMs) are additional tools or extensions that enhance the LLM’s capabilities beyond its base functions. In this episode, hosts Kathleen Walch and Ron Schmelzer discuss this topic in greater detail.
Can I use plugins with ChatGPT?
Plugins can access external databases, perform specific computations, or interact with other software and APIs to fetch real-time data, execute code, and more.
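Conceptually, a plugin works by letting the model select a named tool while the host application performs the actual call and returns the result. A toy sketch of that dispatch pattern (not the actual ChatGPT plugin API; the tool names and functions here are invented):

```python
def get_time(city):
    # Stand-in for a plugin that fetches real-time external data.
    return f"(live time for {city} would be fetched here)"

def run_calculation(expression):
    # Stand-in for a code-execution plugin; a real one would sandbox this.
    return eval(expression)

# Registry mapping tool names the model can choose to real functions.
PLUGINS = {"clock": get_time, "calculator": run_calculation}

def dispatch(tool, argument):
    """Route a model-selected tool call to the matching plugin."""
    return PLUGINS[tool](argument)

print(dispatch("calculator", "2 + 2"))  # prints 4
```

The result of the dispatched call is then fed back to the model so it can compose its final answer.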
As folks continue to use LLMs, best practices are emerging to help users get the most out of LLMs. OpenAI’s ChatGPT allows users to tailor responses to match their tone and desired output goals. Many have reported that using custom instructions results in much more accurate, precise, consistent, and predictable results. But why would you want to do this and why does it matter?
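Custom instructions are typically applied as a standing system message that is prepended to every request, which is why the tone and format stay consistent across conversations. A minimal sketch, where the instruction text and function names are invented examples:

```python
# Hypothetical custom instructions; in ChatGPT these are set once in the UI.
CUSTOM_INSTRUCTIONS = (
    "You are advising a small-business owner. Answer in plain language, "
    "use bullet points, and keep replies under 150 words."
)

def build_messages(user_prompt, history=()):
    """Assemble a chat request with the custom instructions always first,
    so every turn inherits the same tone and output constraints."""
    messages = [{"role": "system", "content": CUSTOM_INSTRUCTIONS}]
    messages.extend(history)
    messages.append({"role": "user", "content": user_prompt})
    return messages

msgs = build_messages("How should I price a new product?")
```

Because the instructions ride along with every prompt, the model's answers become more consistent and predictable without the user restating their preferences each time.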
Companies of all sizes in every industry are looking to see how Artificial Intelligence (AI), machine learning (ML), and cognitive technology projects can provide them a competitive edge. They want to gain efficiencies and improve ROI in today’s competitive landscape. As a result, this creates tremendous opportunity in the field of AI for professionals who are CPMAI certified and follow the CPMAI methodology.
To improve the reliability and performance of LLMs, sometimes you need to break large tasks or prompts into sub-tasks. Prompt chaining splits a task into sub-tasks and links them into a chain of prompt operations, with each step's output feeding the next prompt. Prompt chaining is useful when the LLM struggles to complete a large, complex task in one step.
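A two-step chain can be sketched as follows. This is an illustration only: `call_llm` is a stub standing in for any real LLM client, and the document-question task is an invented example.

```python
def call_llm(prompt):
    # Placeholder; a real implementation would call an LLM API here.
    return f"<response to: {prompt[:40]}...>"

def answer_from_document(document, question):
    # Step 1: a focused prompt that only extracts relevant passages.
    extract_prompt = (
        f"List the passages from the document below that relate to "
        f"'{question}'.\n\nDocument:\n{document}"
    )
    passages = call_llm(extract_prompt)

    # Step 2: a second prompt that answers using only the extracted
    # passages, keeping each individual prompt small and focused.
    answer_prompt = (
        f"Using only these passages, answer: {question}\n\n"
        f"Passages:\n{passages}"
    )
    return call_llm(answer_prompt)
```

Each sub-task is simple enough for the model to handle reliably, and intermediate outputs can be inspected or corrected before the next link in the chain runs.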
LLMs are basically big “text predictors” that generate outputs based on what the model expects is the most likely desired response to the text-based input the user provides, the prompt. Prompts are natural-language instructions given to an LLM by a human so that it will deliver the results you’re looking for.
AI is helping to re-imagine experiences of all sorts, including air travel. At the 2024 SXSW Conference and Festivals, Bernadette Berger, Director of Innovation at Alaska Airlines, presented on “The Sky’s the Limit: How AI will Re-imagine Airports”. In this episode of the AI Today podcast, hosts and AI thought leaders Kathleen Walch and Ron Schmelzer have the opportunity to interview Bernadette.
AI is having an impact on every industry, including healthcare, where it is helping to reshape the practice of medicine both now and in the future. In this episode of the AI Today podcast we interview Dr. Jag Singh. He is a Professor of Medicine at Harvard Medical School, focusing on the application of AI in the practice of medicine.
During the SXSW 2024 event, Wei Li presented on “AI Everywhere with Software and Hardware”. In this episode of the AI Today podcast we interview Wei Li. He is VP/GM of the AI Software Engineering Team at Intel.
He shares insights from that talk and explains what he means by AI being everywhere in both hardware and software.
As organizations continue to adopt AI, the ideas of innovation and sustainability are becoming important conversations. Intel wants to accelerate AI adoption by lowering barriers to entry for customers. In this episode of the AI Today podcast we interview Nuri Cankaya. He is the VP of AI Marketing at Intel.
Nuri sheds light on the myriad challenges faced by companies as they navigate the integration of AI with real-world data.
AI is having an impact on just about every industry and healthcare is no exception. In this episode of the AI Today podcast Cognilytica AI thought leaders Kathleen Walch and Ron Schmelzer interview Dr. Jesse Ehrenfeld. He is President of the American Medical Association (AMA). He also recently spoke at the 2024 SXSW Conference and Festivals.