Before we begin…
Welcome to the WorkplaceTech Pulse, presented by SHRM Labs. We are expanding our resources to bring you the best possible information from leaders in HR technology and transformation.
My name is Nell Hellem, innovation catalyst at SHRM Labs. You will hear from me as well as my colleagues every other week with the release of each new edition. Let us know any topics you’d like to hear about related to workplace tech and we will consider them for future editions of the WorkplaceTech Pulse.
*This edition of the WorkplaceTech Pulse was written by Trevor Schachner, former Product Manager and Workplace Innovation Specialist at SHRM Labs. We hope you find his insights valuable.
Introduction
Now, a little over a year after the introduction of ChatGPT, we find ourselves in a very different AI landscape. With multiple competitors all vying for market share, enterprise contracts for generative AI tools, and more legal and government scrutiny than ever, we are in for an eventful few years of AI evolution.
We are diving deep on generative AI in this edition and getting into some of its technical details. You are probably wondering, “Why does someone in HR need to know this?” It is important to understand what makes up the technology we use every day, especially when it affects organization-wide decisions about talent, operations, and strategy. And as AI is embedded into more and more of our daily HR processes, we need to stay educated about how it works. Understanding the technology is one part of the equation; the second is finding the right opportunity to use it. We are also going to look at how one HR team has implemented generative AI tools into their workflow and how it has impacted their organization. Let's dive in.
Be sure to check out all of our editions of the WorkplaceTech Pulse!
AI versus Generative AI
Artificial intelligence has been considered possible since 1950, when Alan Turing, widely regarded as the father of theoretical computer science and artificial intelligence, published a paper called “Computing Machinery and Intelligence,” in which he famously wrote: “I propose to consider the question, ‘Can machines think?’” The answer, eventually, was yes.
The first AI program was created in 1956 by three scientists: Herbert Simon, Allen Newell, and Cliff Shaw. Their program, called the Logic Theorist, was developed to solve mathematical proofs. They had demonstrated that a machine could think like a mathematician. AI research made substantial progress over the next 20+ years, with computer memory and storage being the main limiting factor. By 1997 this was no longer the case, and an AI program called Deep Blue, created by IBM to play chess, defeated the chess grandmaster Garry Kasparov. I include all of this background because it is important to know that AI has been around for a long time now.
AI in the traditional sense is just a computer program that has been given a certain set of rules to process information and data and produce a specific result. It can be improved over time with reinforcement learning and fine-tuning, but it does not create new and novel content or problem-solving methods. Traditional AI can be seen at work in everyday life, from Netflix and Google recommendations to Alexa and Siri (as of this writing in 2023).
Generative AI is the next frontier.
Generative AI
Generative AI (GenAI), much like traditional AI, is not new and has been researched since the 1960s. Three recent innovations in machine learning have allowed GenAI to boom. The first was a breakthrough in 2014, called Generative Adversarial Networks, which allowed GenAI to create authentic-looking photos, videos, and audio. The second, called Transformers, allowed researchers to train large models without manually labeling and organizing data. The third, called Large Language Models (LLMs), allows machines to understand the relationships between words and the context of text inputs. LLMs power many of the AI tools we interact with today. We won’t go much deeper into the technical details, but it is important to know that each of these advancements played a role in bringing GenAI to where it is today.
So how does it work? The main components of an AI tool are an interface, a model, an input, and an output.
The Interface:
This is how a user will see and interact with the AI tool. This could be a website, an application, a chatbot, or even an interface that is not seen by users. Depending on the application, GenAI could run in the background and only the output will be visible to the user. If you visit ChatGPT, the webpage you land on is simply the interface for the AI tool.
Interfaces play a huge role in how we use and interact with technology. Before 2023, GenAI could be used, but it was reserved for those technically savvy enough to send and receive information via APIs (an API is a way to send and receive information between applications). The introduction of user interfaces, like those seen in ChatGPT or Bard, has allowed more people to easily interact with these AI tools.
The Model:
This is how the AI tool will process information and provide a useful output to users. Models are designed to mimic how the human brain functions. Built with machine learning algorithms, these models are created to process, learn, and make decisions based on data. There are many different types of models, such as Linear Regression, Deep Neural Networks, Logistic Regression, Decision Trees, and many more. GPT-4, LaMDA, LLaMA, and DALL-E 2 are other models that have been popularized through their use in AI tools. The choice of model matters for the application, as some models will be better suited for text processing while others will excel at images and video. Models can also be trained on proprietary data to provide industry- or organization-specific results.
The Input:
Depending on the model and interface, this is how the AI ingests information. It could be text, audio, video, images, or a combination of data types. This is important because each model will react differently to different inputs. For example, when using ChatGPT or another text-focused AI tool, it is possible to get two completely different responses based on how a question is worded or how much context is provided alongside it. Likewise, a text-focused tool like ChatGPT would react very differently to an uploaded image than an AI tool like DALL-E that is built to handle that type of input (although, as of this writing, the two work hand in hand through one interface). For many years the input was a huge hurdle for using AI, but with LLMs we are now able to use common language to communicate with these models.
The Output:
The output of an AI tool can vary widely based on the input and the model, but this is where an expected result is formulated. The AI tool takes the input, runs it through the model, and outputs information that is synthesized from the model's knowledge and the input provided. This is where the term “generative” comes into play. Without powerful models, AI could only perform a specific operation, but now AI is able to generate and create new information.
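To make the four components concrete, here is a toy sketch in Python. The “model” here is a trivial keyword lookup standing in for a real neural network, and all names and canned responses are invented purely for illustration:

```python
# Toy illustration of the four components of an AI tool:
# interface (run_tool), model (toy_model), input (user text), output (reply).
# The "model" is a trivial keyword lookup, NOT a real neural network.

def toy_model(prompt: str) -> str:
    """Stand-in model: maps an input to an output via canned rules."""
    canned = {
        "summarize": "Here is a short summary.",
        "translate": "Here is a translation.",
    }
    # Pick a canned response based on a keyword found in the input.
    for keyword, response in canned.items():
        if keyword in prompt.lower():
            return response
    return "I don't have a response for that."

def run_tool(user_input: str) -> str:
    """Interface layer: accepts the input, calls the model, returns the output."""
    return toy_model(user_input)  # input -> model -> output

print(run_tool("Please summarize this report."))  # Here is a short summary.
```

The point of the sketch is the separation of concerns: the user only ever touches the interface, while the model does the work behind it, which is exactly why swapping in a better model can improve a tool without changing how people use it.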
Large Language Models
One of the core components of Generative AI (GenAI) in HR is Large Language Models (LLMs). These advanced computer programs are trained on massive datasets of text and code, enabling them to understand the nuances of language, analyze writing styles, and even generate creative text formats.
Here's how LLMs work and their potential within the HR landscape.
Understanding the Engine: Tokens and Predictions
LLMs operate by breaking down text into smaller units called tokens. Think of tokens as the building blocks of language, similar to how words are the building blocks of sentences. An LLM dissects a sentence like "The quick brown fox jumps over the lazy dog" into individual tokens, each roughly representing a word (in practice, a token is often a word or a fragment of a word).
The true magic lies in prediction. LLMs excel at predicting the next token in a sequence. After analyzing the first few tokens (e.g., "The quick brown fox"), the LLM predicts the most likely word to follow (e.g., "jumps"). This prediction is based on the vast amount of text data the LLM has been trained on, allowing it to understand the context and relationships between words.
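The prediction idea above can be sketched with a toy bigram model: count which token follows which in a tiny corpus, then predict the most frequent follower. Real LLMs use deep neural networks over subword tokens and vastly more data; this toy corpus and word-level "tokenizer" are simplifications meant only to show the shape of the idea:

```python
from collections import Counter, defaultdict

# Minimal sketch of next-token prediction, the core operation of an LLM.
# Real LLMs learn these statistics with neural networks; here we just
# count word pairs in a tiny invented corpus.

corpus = (
    "the quick brown fox jumps over the lazy dog . "
    "the quick brown fox runs over the hill ."
)
tokens = corpus.split()  # crude "tokenizer": one token per word

# Count which token follows which.
following = defaultdict(Counter)
for current, nxt in zip(tokens, tokens[1:]):
    following[current][nxt] += 1

def predict_next(token: str) -> str:
    """Return the most frequent token seen after `token` in the corpus."""
    return following[token].most_common(1)[0][0]

print(predict_next("quick"))  # brown
print(predict_next("the"))    # quick
```

Given "The quick brown fox", the toy model predicts the next word the same way an LLM does in spirit: by asking which continuation was most common in its training data, just with word counts instead of billions of learned parameters.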
This continuous prediction process empowers LLMs to perform a multitude of tasks including:
Summarization: LLMs can efficiently summarize lengthy documents like job descriptions, performance reviews, or candidate applications. Imagine an LLM condensing a lengthy job description into key bullet points highlighting essential qualifications and responsibilities, saving HR professionals valuable time and effort.
Information Extraction: LLMs can be instructed to extract specific details from text using prompts. For example, an HR professional can prompt an LLM to find specific skills or experiences mentioned in a resume, streamlining the screening process.
Personalized Communication: LLMs can personalize communication with candidates and employees through various prompts. Imagine an LLM generating personalized emails tailored to individual candidates based on their qualifications and the specific job role, enhancing the candidate experience.
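As a concrete sketch of the information-extraction idea, the snippet below pulls skill keywords out of free-form resume text with plain string matching. A real LLM prompt ("List the skills mentioned in this resume") handles synonyms, phrasing, and context far better; the skill list and resume text here are invented for illustration:

```python
# Toy information extraction: find known skills in resume text.
# An LLM would understand context and synonyms; this simple keyword
# scan only illustrates the input -> extracted-fields pattern.

KNOWN_SKILLS = {"python", "payroll", "recruiting", "excel", "onboarding"}

resume = """
Experienced HR generalist. Managed onboarding for 200+ hires,
ran payroll in Excel, and built recruiting pipelines.
"""

def extract_skills(text: str) -> set[str]:
    """Return the known skills that appear as words in the text."""
    words = {w.strip(".,+").lower() for w in text.split()}
    return KNOWN_SKILLS & words

print(sorted(extract_skills(resume)))
# ['excel', 'onboarding', 'payroll', 'recruiting']
```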
It's important to remember that LLMs are still under development, and ethical considerations like bias and factual accuracy need to be addressed before widespread implementation in HR practices. However, the potential of LLMs to streamline tasks, improve efficiency, and personalize the HR experience is undeniable. As LLM technology continues to evolve, HR professionals can leverage this powerful tool to shape the future of work and create a more efficient and effective talent management process.
Beyond the Basics: Generative Power for HR
The true potential of LLMs lies in their generative capabilities. By understanding the context and relationships between words, LLMs can be prompted to create information. Here are a few common use cases:
Job Description Writing: LLMs can be used to generate initial drafts of job descriptions based on specific criteria provided by HR professionals. This can significantly reduce the time and effort required for creating accurate and engaging job descriptions.
Interview Question Generation: LLMs can be prompted to generate interview questions based on the specific job role and desired candidate skills. This can help HR professionals create a more targeted and effective interview process.
Performance Feedback Creation: LLMs can be used to generate initial drafts of performance feedback, taking into account specific employee data and performance metrics. This can provide HR professionals with a starting point and ensure consistent and objective feedback.
While the aforementioned functionalities showcase the versatility of LLMs, effectively harnessing their potential hinges on prompt engineering (we have a whole article dedicated to this topic here). Prompt engineering refers to the art of crafting clear and concise instructions that guide LLMs towards achieving the desired outcome. Just like providing clear instructions to a colleague yields better results, well-designed prompts are essential for maximizing LLM effectiveness in the HR domain.
By understanding the different LLM operations – reductive, transformative, and generative – you can tailor prompts to achieve specific goals:
Reductive prompts: Reductive operations function like information filters, condensing larger inputs into a smaller, more focused output. They excel at extracting key elements, such as summarizing a lengthy report or identifying specific data points like names, dates, or locations within a text. Additionally, reductive operations can categorize text based on its content or style, like classifying an email as a complaint or a job application.
Example: Imagine needing to summarize lengthy performance reviews for senior management. An effective prompt could be: "Summarize the key strengths and weaknesses identified in this performance review, focusing on areas for improvement." This prompt instructs the LLM to condense the information while highlighting crucial aspects.
Transformative prompts: Transformative operations manipulate the form and presentation of information without altering its core meaning. They excel at reshaping the input without changing the underlying content. Imagine transforming a block of text into a table, a bulleted list, or a timeline. Translating languages and altering writing styles to be more formal, casual, or persuasive also fall under this category.
Example: Transforming lengthy job descriptions into bulleted lists of key qualifications and responsibilities can be achieved through a prompt like: "Convert this job description into a bulleted list highlighting the essential skills and experience required for the position." This prompt instructs the LLM to restructure the information while preserving its core meaning.
Generative prompts: Generative operations are the true creators, expanding upon a given input to generate entirely new content. These operations use the provided information as a springboard to produce original outputs like articles, emails, creative writing pieces, or even code. They can also brainstorm novel ideas, generate outlines, or further explore a topic by adding details and examples.
Example: Brainstorming new training materials can be facilitated by a prompt like: "Generate creative training content on effective communication skills, incorporating real-world scenarios relevant to our organization." This prompt encourages the LLM to generate original content based on the provided parameters.
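The three operation types ultimately come down to how the prompt is worded. The sketch below collects one prompt of each type around the same document; `call_llm` is a hypothetical placeholder for whatever LLM API your organization uses (it is not a real library call), and the job description text is invented:

```python
# One request phrased as each of the three prompt operation types.
# `call_llm` is a hypothetical placeholder, NOT a real API; in practice
# it would send the prompt to your organization's LLM provider.

def call_llm(prompt: str) -> str:
    """Placeholder for an actual LLM API call."""
    return f"[model output for: {prompt[:30]}...]"

job_description = "We are hiring a payroll specialist responsible for ..."

prompts = {
    # Reductive: condense a larger input into a smaller, focused output.
    "reductive": "Summarize the essential qualifications in this job "
                 "description in two sentences:\n" + job_description,
    # Transformative: reshape the input without changing its meaning.
    "transformative": "Convert this job description into a bulleted list "
                      "of key skills and responsibilities:\n" + job_description,
    # Generative: expand the input into entirely new content.
    "generative": "Using this job description as a starting point, draft "
                  "five behavioral interview questions:\n" + job_description,
}

for kind, prompt in prompts.items():
    print(kind, "->", call_llm(prompt))
```

Notice that the document is the same in all three cases; only the instruction changes, which is why investing in prompt wording pays off more than most users expect.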
By mastering prompt engineering, HR professionals can unlock the full potential of LLMs and seamlessly integrate them into various workflows. This empowers them to streamline tasks, enhance efficiency, and personalize the HR experience, ultimately contributing to a more effective talent management process.
Conclusion
The field of GenAI is rapidly evolving, with new advancements emerging at an unprecedented pace. It's impossible to predict the exact trajectory of this technology in the years to come. However, one thing is certain: GenAI's influence will continue to grow, fundamentally reshaping how we work. As HR professionals, staying informed about these developments and actively seeking opportunities to integrate responsible GenAI solutions will be critical to navigating and preparing for the future of work.
References
- https://www.forbes.com/sites/bernardmarr/2023/07/24/the-difference-between-generative-ai-and-traditional-ai-an-easy-explanation-for-anyone/
- https://sitn.hms.harvard.edu/flash/2017/history-artificial-intelligence/
- https://history-computer.com/logic-theorist/
- https://blog.stackademic.com/understanding-the-difference-between-gpt-and-llm-a-comprehensive-comparison-1f624c713507
- https://www.techtarget.com/searchenterpriseai/definition/generative-AI
- https://dataconomy.com/2023/04/04/best-ai-models-types-how-to-choose-what-is/
- https://scs.georgetown.edu/news-and-events/article/9402/generative-ai-can-help-recruiters-shouldnt-replace-them
- https://analyticsindiamag.com/leena-ai-unveils-worklm-its-proprietary-large-language-model/
- https://joshbersin.com/2023/03/the-role-of-generative-ai-and-large-language-models-in-hr/
- https://www.capellasolutions.com/blog/leveraging-llms-for-enhanced-human-resource-management
- https://medium.com/@andrew_johnson_4/harnessing-large-language-models-for-human-resource-management-7390d193c608
- https://www.youtube.com/watch?v=aq7fnqzeaPc
SHRM Labs, powered by SHRM, is inspiring innovation to create better workplace technologies that solve today’s most pressing workplace challenges. We are SHRM’s workplace innovation and venture capital arm. We are Leaders, Innovators, Strategic Partners, and Investors that create better workplaces and solve challenges related to the future of work. We put the power of SHRM behind the next generation of workplace technology.