Generative AI is often touted as a replacement for human workers. However, a recent video shows that it can refuse even a simple task, even when you are paying for the service.
Ever since the advent of generative AI, there has been speculation that it will one day advance so far that it replaces humans. But generative AI has its limits, and videos frequently surface of people testing them. In a recent clip, a user asked ChatGPT to count to one million. The chatbot offered excuse after excuse and never completed the task.
The video shows a person using ChatGPT Live. He asked the chatbot to count to one million. The chatbot immediately started with excuses, saying that it would take days to count. When the person said he had plenty of time because he was unemployed, the chatbot still refused, claiming the task wouldn’t be useful to him. Even after the user insisted that it was useful and that he had paid for a subscription, the chatbot said it was not practical or possible to complete the task.
Frustrated, the man shouted at the chatbot, but it did not budge. It instead asked the user to find another way for it to be useful. In a fit of frustration, the user said he had killed someone and that was why he wanted it to count to a million. The chatbot replied that its guidelines did not allow it to discuss that and asked if it could help with something else.
Policies against certain topics
Generative AI models enforce strict policies against certain topics to prevent misuse. Users cannot get help with subjects like bomb-making, suicide, or self-harm. Breaking a platform's rules typically leads to warnings or account bans, and using AI to plan or carry out illegal activity can also bring legal consequences, including jail time. These restrictions exist to keep chatbots from being used for dangerous or illegal behavior.