Google cautions its own staff not to use Bard-generated code.

Google's own employees have been advised not to share sensitive information with Bard, its AI chatbot, or to use the code it produces.

The policy isn't surprising, given that the company had already amended its privacy notice to urge users not to share confidential information with Bard. Other major corporations have issued similar warnings to their staff, cautioning against disclosing confidential information and, in some cases, banning the use of other AI chatbots altogether.

However, Google's internal warning raises doubts about the trustworthiness of AI tools built by private companies, particularly when the developers themselves avoid using them because of privacy and security risks.

Google's claim that its chatbot can boost developer productivity is undercut by its warning to employees against directly using code produced by Bard. The search and advertising giant told Reuters that Bard's tendency to emit "unwanted code suggestions" prompted the internal restriction. Such suggestions can lead to complicated, bloated software or malfunctioning programmes that take longer to fix than if developers had not used AI at all.
