Walmart Is Warning Employees About ChatGPT: Is The AI Tool More Dangerous Than We Thought?
Walmart has lifted its ban on employees using the AI chatbot ChatGPT, but workers must now follow certain rules and avoid posting any private information to the chatbot.
Over the past few weeks, it seems like all anyone can talk about online is ChatGPT. Writers, in particular, are worried about AI writing movies and taking their jobs, and everyone else is kind of worried that there is apparently an “evil” version of ChatGPT. Now, according to Insider, Walmart, of all places, has joined the fray by asking employees to “avoid inputting any sensitive, confidential, or proprietary information” about Walmart into ChatGPT.
Yesterday, Walmart sent an internal message to all of its employees, asking them not to share sensitive company information with ChatGPT or similar AI tools. Before that, the company had blocked use of the software entirely after discovering “activity that presented risk to our company.”
Since then, Walmart has taken another approach and revised its original out-and-out ban. Now, employees are allowed to use ChatGPT, but only if they follow certain rules and stay mindful of what information they share. Amazon (which is already publishing books written by AI) and Microsoft have both recently issued similar warnings about the new technology, which apparently has as many drawbacks as it does advantages.
Walmart employees have been asked not to input any personal information about either employees or shoppers and not to divulge financial information. Basically, anything that could be considered sensitive information, confidential knowledge, or information about Walmart’s business should not be put into ChatGPT or other AI tools.
Inputting this type of information poses a risk not just to the personal privacy of employees and shoppers, but also to Walmart’s business as a whole. Prompts submitted to ChatGPT aren’t guaranteed to stay private, so if someone were to feed details of Walmart’s “business process, policy, or strategy” into the chatbot, that information could effectively become public knowledge and hurt the company’s competitive position.
Coding is also a big issue. Walmart asks employees not to paste existing Walmart computer or website code into ChatGPT or similar tools, and not to use them to generate new code. Doing so would, again, expose Walmart’s private business information, but that’s not all. Such a breach of confidentiality could also raise questions about whether Walmart retains the rights to the code involved, as well as the rights to any related “products, information, or content.”
Walmart’s memo stressed the importance of using ChatGPT “appropriately.” While it is a cool tool that can help workers be more efficient and can lead to breakthroughs and new ideas, it can also be dangerous when not used with care and attention.
Walmart also emphasized that even when employees use ChatGPT for permitted tasks, they must still check the AI tool’s output carefully for accuracy before relying on it. AI has come a long way, but it isn’t perfect yet. You can’t blindly trust the answers or information it gives you, as the technology is still maturing and can be confidently wrong.
Ultimately, it is not really ChatGPT that Walmart is worried about; it is how employees use it. AI can be a useful business tool, but only if you use it thoughtfully and stay attentive to the potential consequences of what you are doing.