Dr. Kathy Meier-Hellstern is the principal engineer and director of the Responsible AI and Human-Centered Technology organization in Google Research. Her current focus is on responsibility in generative models, with an emphasis on model evaluation and mitigation techniques. She recently spoke at the Business Ethics Alliance luncheon in Omaha about artificial intelligence and the future of business.
Here are a few key takeaways from the event.
- Generative AI is taking off. Generative AI is the latest evolution of artificial intelligence: its models produce outputs that are not confined by pre-written rules. Generative AI includes large language models, like ChatGPT and Google Bard, that can generate new content in the form of text, audio, images, video or other information. The combination of prompting and a simple user interface now makes AI accessible to the average person. Generative AI uses machine learning techniques, specifically deep learning, which relies on neural networks loosely modeled on the human brain. These systems do not simply copy or regurgitate existing data; they produce novel, creative outputs.
- AI model outputs are only as good as their inputs. Artificial intelligence models are trained on data, and more (and better) training data generally results in more capable models. However, these models are susceptible to bias because, whether intentionally or not, the data they are trained on is biased. There is an opportunity to teach models to produce more balanced outputs through continuous evaluation and improvement. Drawing testers from diverse backgrounds and geographies is one way to mitigate bias and reduce harm in AI (see the sketch after this list for a toy illustration of how skewed training data shows up).
- AI will disrupt the economy, much as the internet did. The workforce will need training in new skills to thrive in the new environment.
- AI still requires human oversight. AI can still “hallucinate,” or make up untrue information. AI is advancing quickly, but we’re still 10-20 years away from realizing its full potential. Mission-critical tasks that are automated by AI still need to be monitored by humans, and some functions, like making judgments, should always be performed by humans.
- There’s no such thing as “ethical AI.” AI is a model, so it can’t be ethical or unethical; people can use the tools in ways that are ethical or not. Training data needs to be representative and diverse from the start in order to reduce the potential for harm. Implementing regulations at the federal level could provide further protection from bad actors.
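To make the “outputs are only as good as inputs” point concrete, here is a minimal, hypothetical Python sketch that checks a toy training set for two common input problems: groups that are underrepresented, and groups whose examples carry skewed labels. The dataset, group names, and thresholds are all made up for illustration; real bias evaluation, the kind Dr. Meier-Hellstern’s organization works on, is far more involved.

```python
from collections import Counter, defaultdict

# Toy training examples: (demographic group, label).
# Entirely made up for illustration; a real dataset would have
# many features and far more rows.
training_data = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_a", 1), ("group_a", 0), ("group_a", 1), ("group_a", 1),
    ("group_b", 0), ("group_b", 0),
]

# 1) Representation: how much of the data does each group contribute?
counts = Counter(group for group, _ in training_data)
total = sum(counts.values())
for group, n in counts.items():
    share = n / total
    if share < 0.25:  # illustrative threshold, not a real standard
        print(f"{group}: only {share:.0%} of examples -- underrepresented")

# 2) Label skew: does the positive-label rate differ sharply by group?
positives = defaultdict(int)
for group, label in training_data:
    positives[group] += label
rates = {g: positives[g] / counts[g] for g in counts}
if max(rates.values()) - min(rates.values()) > 0.3:  # illustrative gap
    print(f"Label rates differ by group: {rates} -- a model trained on "
          "this data will likely reproduce the skew in its outputs")
```

In this toy example, both checks fire: group_b supplies only 20% of the examples and never receives a positive label, so a model trained on this data would tend to carry that skew into its outputs. Diverse testers and data curators, as the talk notes, are how such gaps get caught before a model ships.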
Interested in learning more about AI in Nebraska? Check out these Nebraska Meetups: AI Omaha and Lincoln AI.