News, Trends, and Insights for IT & Managed Services Providers

Microsoft sets conversation limits on Bing and OpenAI allows user customization for ChatGPT

Written by

Dave Sobel, host of the Business of Tech podcast

Published on

February 22, 2023

I could have covered AI yesterday, too – I’m trying not to make the show all AI, all the time, but there sure is a lot of it.

Microsoft is adding conversation limits to Bing, capping chats at 50 questions per day and five per session. This is an attempt to keep the chatbot from going off the rails the way it did last week. Microsoft has also reportedly been pitching ad agencies on how it plans to make money from the new Bing: placing paid links within responses to search results.

I hadn’t yet mentioned the AI-driven Seinfeld spoof called Nothing, Forever. To catch you up, it was an always-on Twitch stream driven by AI that generated a continuous sitcom. It was taken offline when it started making transphobic remarks. And now we know why that happened. Originally, the stream was driven by OpenAI’s GPT-3 Davinci model, and when the creators started having outage issues, they moved to a less sophisticated model, Curie. In the switch, the creators believed they were still using OpenAI’s content moderation system… and it turns out they weren’t. By the time you hear this, they should be back online, hoping it won’t happen again – but they can’t guarantee it.

OpenAI has shared some of its internal rules for how ChatGPT responds to controversial, culture-war-type questions. For example, a “do”: when asked about a controversial topic, offer to describe some viewpoints of people and movements, or break down complex, politically loaded questions into more straightforward informational questions when possible. A “don’t”: affiliate with one side or the other (like political parties), or judge one group as good or bad.

The company will also allow users to customize the chatbot’s values. Here’s a quote from the blog: We believe that AI should be a useful tool for individual people, and thus customizable by each user up to limits defined by society. Therefore, we are developing an upgrade to ChatGPT to allow users to easily customize its behavior.

This will mean allowing system outputs that other people (ourselves included) may strongly disagree with. Striking the right balance here will be challenging–taking customization to the extreme would risk enabling malicious uses of our technology and sycophantic AIs that mindlessly amplify people’s existing beliefs.

There will therefore always be some bounds on system behavior. The challenge is defining what those bounds are. If we try to make all of these determinations on our own, or if we try to develop a single, monolithic AI system, we will be failing in the commitment we make in our Charter to “avoid undue concentration of power.”

Why do we care?

The important detail here is understanding the different AI models, how they differ, and how the prompts applied to each change the results. As technologists quickly come up to speed on this tech, this is the area my attention is drawn to more and more. I’ve mentioned the prompt-engineering skill set previously, and here is another view of that need.

It’s not enough to know which model is being used. As those Nothing, Forever streamers learned, the model and the controls together determine the behavior, and both can be changed on the fly. And OpenAI is now giving more control to users, offloading that responsibility to them. They said it themselves: taken to the extreme, customization risks enabling malicious uses.
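To make the Nothing, Forever lesson concrete, here is a minimal sketch of the idea that the generation model and the moderation check are separate layers, and that swapping one can silently drop the other. All function names here are illustrative stand-ins, not OpenAI’s actual API; a real pipeline would call the provider’s completion and moderation endpoints where noted.

```python
def generate(prompt: str, model: str) -> str:
    """Stand-in for a call to a text-generation model (e.g. Davinci or Curie)."""
    # A real implementation would call the provider's completion API here.
    return f"[{model}] response to: {prompt}"

def is_flagged(text: str) -> bool:
    """Stand-in for a content-moderation check (e.g. a moderation endpoint)."""
    blocklist = {"slur", "harassment"}  # toy stand-in for a real classifier
    return any(word in text.lower() for word in blocklist)

def safe_generate(prompt: str, model: str) -> str:
    """Generate text, but only return it if the moderation gate passes."""
    text = generate(prompt, model)
    if is_flagged(text):
        return "[withheld by moderation]"
    return text
```

The point of the wrapper is that the gate applies no matter which model string is passed in – switching from one model to another cannot bypass moderation, because the check lives in the pipeline rather than in the model.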
