OpenAI didn’t release GPT-4 visual capabilities out of fear


OpenAI has not yet released the image analysis capabilities of the GPT-4 AI model to the public because of privacy concerns.

OpenAI has for some time been testing a multimodal version of the GPT-4 artificial intelligence model that can recognize images. However, the public release of this capability has apparently been delayed over concerns about the risks of facial recognition in the system.

According to the New York Times, the GPT-4 model can not only generate and process text but also recognize and analyze images. Shortly after introducing the model, OpenAI announced that it would bring new image processing capabilities to it in collaboration with the startup Be My Eyes. One of these capabilities is facial recognition, which has become a problem for the company.

Sandhini Agarwal, a policy researcher at OpenAI, confirmed that privacy issues prevented the company from rolling out facial recognition in GPT-4. OpenAI's system is already capable of recognizing famous faces, including people with Wikipedia pages, but the company is concerned that its facial recognition feature may violate privacy laws in places like Illinois and Europe, where the use of biometric recognition systems requires citizens' explicit consent.


GPT-4 can also make mistakes in analyzing facial features

OpenAI also says it is concerned that the Be My Eyes service may misjudge attributes of people's faces, such as their gender or emotional state, and thereby harm them. The company plans to address these concerns before the general release of image analysis capabilities in GPT-4.


Despite OpenAI's efforts, there have already been examples of GPT-4 misidentifying what it sees. The tool therefore needs to be refined until it no longer gives blind users incorrect information.

Microsoft is also testing image analysis capabilities in Bing's AI chatbot. This chatbot, which uses the GPT-4 model, has apparently even managed to solve CAPTCHA tests, so the general release of that feature will probably be delayed as well.
