The AI bandwagon didn’t slow down much for the holiday weekend.
Oracle has added generative AI to its Fusion Cloud Human Capital Management (HCM) system. The company says the capabilities will help HR employees generate customized text for job descriptions specific to the position and company, write requirements for job postings, summarize employee performance data from peers and managers for reviews, and generate suggestions tailored to the company culture.
Bing integration with ChatGPT was rolled back after the feature was found to bypass paywalls, surfacing subscription content to users without a subscription.
Microsoft has launched what appears to be the first professional certificate for generative AI skills. Part of the company’s Skills for Jobs program and available through LinkedIn Learning, the Career Essentials in Generative AI program offers free coursework and awards the certificate to anyone who completes it and passes the required exam.
Also updated: a framework for financial institutions to evaluate the responsible use of AI. The assessment toolkit focuses on four fundamental principles: fairness, ethics, accountability, and transparency. It offers a checklist and methodologies for businesses in the financial sector to define the objectives of their AI and data analytics use and to identify potential bias. Available on GitHub, the open-source toolkit allows for plugins that integrate with a financial institution’s IT systems.
In that vein, a new transatlantic Responsible AI in Healthcare consortium, organized by the Austin-based Responsible AI Institute, launched on Wednesday at Cambridge University to help hospitals and other health providers use AI more safely. It’s backed by Harvard Business School and the UK’s National Health Service.
The American Medical Association voted on Tuesday to adopt a proposal to help protect patients against false or misleading medical information from artificial intelligence. The AMA will work with the Federal Trade Commission (FTC), the Food and Drug Administration (FDA), and other regulatory agencies to mitigate AI-generated medical misinformation, according to the resolution proposed by the American Society for Surgery of the Hand and the American Association for Hand Surgery. The AMA will propose state and federal regulations for AI tools at next year’s annual meeting, according to a statement.
OpenAI and Microsoft were sued in a class action lawsuit for allegedly stealing “vast amounts of private information” from internet users without consent to train ChatGPT. This lawsuit, filed on June 28 in federal court in San Francisco, CA, claimed that OpenAI secretly “scraped 300 billion words from the internet” without registering as a data broker or obtaining consent.
In other legal news, New York City’s new law regulating AI and automation in hiring decisions went into effect Wednesday. The law, known as Local Law 144, requires employers that use certain kinds of software to assist with hiring and promotion decisions—including chatbot interviewing tools and resume scanners that look for keyword matches—to audit those tools annually for potential race and gender bias and then publish the results on their websites.
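For a sense of what those audits actually compute, published guidance for Local Law 144 centers on selection rates and impact ratios per demographic category. Here’s a minimal sketch of that arithmetic in Python; the group labels and counts are invented for illustration, not drawn from any real audit.

```python
# Hypothetical bias-audit arithmetic in the style of Local Law 144 guidance.
# All group names and counts below are invented for illustration.

screened = {
    "group_a": {"applicants": 400, "selected": 120},
    "group_b": {"applicants": 300, "selected": 60},
}

# Selection rate: share of each category's applicants the tool advanced.
rates = {
    group: counts["selected"] / counts["applicants"]
    for group, counts in screened.items()
}

# Impact ratio: each category's selection rate relative to the highest rate.
# Ratios well below 1.0 flag potential adverse impact worth investigating.
highest = max(rates.values())
for group, rate in sorted(rates.items()):
    print(f"{group}: selection rate {rate:.2f}, impact ratio {rate / highest:.2f}")
```

Under these made-up numbers, group_a selects at a rate of 0.30 and group_b at 0.20, giving group_b an impact ratio of 0.67 against the top group.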
And in Maine, state-government agencies can’t use generative artificial intelligence tools like ChatGPT for at least six months, following a cybersecurity directive issued last week by the state’s IT agency, which cited the technology’s cybersecurity and privacy risks.
Cyberscoop profiled one of the FTC’s enforcement tools: algorithm disgorgement. Also referred to as model deletion, the strategy requires companies to delete products built on data they shouldn’t have used in the first place. For instance, if the commission finds that a company trained a large language model on improperly obtained data, the company must delete both the data and the products developed from it.
Why do we care?
Most of the stories I’m tracking are not technical. They are regulatory or ethical. In particular, I’m noting more and more frameworks for AI use. This should be great news: implementations are a service provider’s dream. Service financial customers? There’s a framework for that. Service healthcare? One is forming there. There are broad frameworks, and there are specific ones.
Because, as you see with OpenAI and Microsoft, customers will need to show proper usage to protect themselves legally.