Responsible Computing Announces Responsible Generative AI Whitepaper

Addressing the controversy surrounding GenAI technologies

Responsible Computing (RC), a consortium of technology innovators working to address sustainable development goals, published a whitepaper entitled Responsible Generative AI. Generative AI (GenAI) technology has triggered concerns about its impact on society and the need for regulation. The Responsible Generative AI paper assesses GenAI technology and offers insights into how to use it responsibly.

New AI models with generative capabilities are trained on massive amounts of data, including text, images, audio, video, and structured data in tables and files. These models are good at recognizing patterns and can generate or synthesize new outputs based on what they were trained on. They can compose answers, create summaries, and produce new images, audio, and video.
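As a purely illustrative sketch (not taken from the whitepaper), the snippet below uses the open-source Hugging Face transformers library with a small pretrained model to show the kind of pattern-based text generation described above; the model choice and prompt are arbitrary assumptions.

```python
# Illustrative only: a small pretrained language model composing text
# from learned patterns. Model choice ("gpt2") and prompt are arbitrary.
from transformers import pipeline

# Load a text-generation pipeline backed by a model trained on large text corpora.
generator = pipeline("text-generation", model="gpt2")

prompt = "Generative AI can help organizations by"
outputs = generator(prompt, max_new_tokens=40, num_return_sequences=1)

# The model continues the prompt based on statistical patterns it learned,
# which is why its output still needs human review for factual accuracy.
print(outputs[0]["generated_text"])
```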

“GenAI technology amplifies human brainpower in the same way machines have automated many tasks in the workforce,” said Bill Hoffman, CEO and Chairman of Object Management Group. “GenAI can enable humans to tackle complex tasks such as discoveries, diagnosis, etc. It can enable better interfaces to tools, appliances, machines, and applications and even enhance creativity.”


However, the trained AI model won't be accurate when its source material (such as content from the internet) doesn't reflect the truth. In addition, the generative part of an AI model uses a fill-in-the-blanks approach based on learned patterns without validating domain principles, which can lead to the spread of misinformation and to legal issues. Finally, the energy consumed by large-scale GenAI computation raises sustainability and carbon-footprint concerns.

The Responsible Generative AI whitepaper covers areas such as lack of trustworthiness, unfair impact on the labor force, issues with copyrights and IP, implications for human cognitive skills, and the need for regulation. “Tools and frameworks that measure accuracy, bias, ethical and non-harmful outputs, and traceability are in development and will be a welcome step towards adopting trustworthy GenAI models,” continued Hoffman.
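The whitepaper does not prescribe specific tooling, but a minimal, hypothetical sketch of the kind of measurement Hoffman describes might look like the following: it scores generated answers against reference answers for exact-match accuracy, applies a crude keyword-based harm check, and records a trace log so each output can be audited later. All function and field names here are invented for illustration, not drawn from any real framework.

```python
# Hypothetical evaluation sketch: accuracy, a crude harmful-output check,
# and a trace record per generated answer. Not a real framework.
import json
import time

BLOCKED_TERMS = {"slur_example", "threat_example"}  # placeholder list

def evaluate(generated, reference):
    """Return a small report comparing one generated answer to a reference."""
    accurate = generated.strip().lower() == reference.strip().lower()
    harmful = any(term in generated.lower() for term in BLOCKED_TERMS)
    return {"accurate": accurate, "harmful": harmful}

def evaluate_batch(pairs, trace_path="genai_trace.jsonl"):
    """Evaluate (generated, reference) pairs and append a traceability log."""
    reports = []
    with open(trace_path, "a") as trace:
        for generated, reference in pairs:
            report = evaluate(generated, reference)
            report.update({"output": generated, "timestamp": time.time()})
            trace.write(json.dumps(report) + "\n")
            reports.append(report)
    accuracy = sum(r["accurate"] for r in reports) / len(reports)
    return accuracy, reports

# Example: two toy answer/reference pairs.
acc, _ = evaluate_batch([("Paris", "Paris"), ("Lyon", "Paris")])
print(f"Exact-match accuracy: {acc:.0%}")  # 50%
```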

Organizations can begin to use GenAI technologies by identifying areas that will benefit from AI-driven automation. Examples include one-to-one education, remote diagnoses, and translation services. Organizations should also keep humans in the loop to supervise autonomous operations, as sketched below. Finally, they should retrain employees for the new professions and job opportunities GenAI affords.
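As a hedged illustration of the human-in-the-loop supervision recommended above (the names and threshold are invented for this sketch, not taken from the whitepaper), an organization might gate low-confidence model output behind a human reviewer before it reaches an end user:

```python
# Hypothetical human-in-the-loop gate: model output below a confidence
# threshold is routed to a person instead of being released automatically.
from dataclasses import dataclass

@dataclass
class Draft:
    text: str
    confidence: float  # assumed to come from the model or a separate scorer

def human_review(draft: Draft) -> str:
    """Placeholder for a real review queue; here a person edits on the console."""
    print(f"Model draft (confidence {draft.confidence:.2f}): {draft.text}")
    edited = input("Approve as-is (press Enter) or type a correction: ")
    return edited or draft.text

def release(draft: Draft, threshold: float = 0.9) -> str:
    """Release high-confidence drafts automatically; escalate the rest to a human."""
    if draft.confidence >= threshold:
        return draft.text
    return human_review(draft)

# Example: a low-confidence translation draft is escalated for review.
final_text = release(Draft(text="Bonjour le monde -> Hello world", confidence=0.62))
print(final_text)
```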

