ChatGPT Banned At Apple

Apple has reportedly banned the internal use of ChatGPT and other external AI tools, including Google's Bard chatbot and GitHub's Copilot, to prevent the sharing of sensitive company information. 

Internal Document Seen 

The ban came to light following a report by The Wall Street Journal, which said it had seen internal documents informing Apple employees of the ban. 

Why? 

The main reasons for the extra secrecy measures from Apple are that: 

– There are rumours that Apple is either working on its own generative AI or is making secret technical improvements to Siri to help it compete with Google and Amazon’s digital assistants. 

– Confidential data fed into AI chatbots is often used to further train them and can therefore be revealed if someone else asks the chatbot similar questions. 

– Bugs in chatbots, such as the one discovered in March in an open-source library, can also lead to sensitive information being exposed. 

This fear of leaking company secrets, together with possible security issues, is why many tech companies and banks (such as Amazon, JPMorgan Chase, Bank of America, Citigroup, and Deutsche Bank) have banned internal usage of ChatGPT. Other companies, such as Samsung, have imposed character limits and other restrictions on the use of AI chatbots for the same reason. 

What Does ChatGPT Say? 

In any case, users of ChatGPT are warned upon login that ‘conversations may be reviewed by our AI trainers.’ 

Also, last month, in response to fears that ChatGPT could reveal commercially sensitive information, OpenAI announced that it had introduced the ability to turn off chat history in ChatGPT. Conversations started while chat history is disabled won't be used to train and improve its models and won't appear in the history sidebar. 

OpenAI also said that when chat history is disabled, it will retain new conversations for 30 days and review them only when needed to monitor for abuse, before permanently deleting them. 

Working On A New Business Version

In the same announcement, OpenAI said it’s working on a new ChatGPT Business subscription for professionals who need more control over their data as well as enterprises seeking to manage their end users. This business version will follow its API’s data usage policies, so that end users’ data won’t be used to train OpenAI’s models by default. 
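
To put the API point in context, businesses that want the API's data-usage terms today typically access the models through OpenAI's API rather than the consumer ChatGPT interface. The short sketch below shows what that can look like using OpenAI's Python library; the model name, the example prompt and the 2023-era (v0.x) library interface are illustrative assumptions, not details from the report.

import os

import openai

# Calls made through the API fall under OpenAI's API data-usage policy,
# under which customer data is not used to train its models by default.
# The interface shown is the 2023-era openai Python package (v0.x).
openai.api_key = os.environ["OPENAI_API_KEY"]  # keep the key out of source code

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # illustrative model choice
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarise this meeting note: ..."},
    ],
)

print(response["choices"][0]["message"]["content"])

Routing prompts through the API (or, once available, the ChatGPT Business tier) keeps them under those API data-usage terms rather than the consumer product's default training behaviour.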

Call For Regulation of AI 

Recently, OpenAI’s CEO, Sam Altman, told the US Senate Judiciary Committee that he supported regulatory intervention by governments to mitigate the risks of increasingly powerful AI models. 

What Does This Mean For Your Business? 

It has been known for months that, due to the way they are trained (and possible bugs in this relatively new technology), ChatGPT and other AI chatbots have the potential to reveal sensitive information entered by users. This is why many big companies have moved to close this loophole by simply banning the internal use of ChatGPT. OpenAI has been relatively transparent about how its AI chatbot is trained and about the possible risks, and its CEO has himself publicly supported regulation of AI as the technology moves forward at an alarming pace. 

Also, as highlighted above, OpenAI has introduced measures that give users control over their ChatGPT history and has promised a business version that gives users more control over their data. For businesses that are particularly concerned about privacy and security in the use of chatbots for work, the safest guidance for now may simply be to ban the use of AI chatbots or to introduce controls on what can be entered and how. 
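
As an illustration of the ‘controls on what can be entered’ option, the sketch below shows one simple approach: redacting obviously sensitive details (email addresses, API-style keys, internal codenames) from a prompt before it is ever sent to an external chatbot. The patterns and the redact() helper are hypothetical examples for illustration, not a complete data-loss-prevention tool.

import re

# Hypothetical patterns a business might block: email addresses,
# API-style secret keys, and internal project codenames (made up here).
SENSITIVE_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "API_KEY": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
    "CODENAME": re.compile(r"\bProject\s+(?:Falcon|Osprey)\b", re.IGNORECASE),
}


def redact(prompt: str) -> str:
    """Replace anything matching a sensitive pattern with a placeholder."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label}]", prompt)
    return prompt


if __name__ == "__main__":
    raw = "Ask the Project Falcon lead jane.doe@example.com to rotate key sk-abc123def456ghi789."
    print(redact(raw))
    # Ask the [REDACTED CODENAME] lead [REDACTED EMAIL] to rotate key [REDACTED API_KEY].

A filter like this is no substitute for a policy decision, but it shows the kind of lightweight control a business could place between staff and an external AI tool.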


Mike Knight