Is AI Ready to Run Amok (Part 1)? – EEJournal

Machine Learning


I know… I know… everywhere we turn these days, we're presented with yet another story focused on artificial intelligence (AI) and/or machine learning (ML). Do you really need one more? Well, I'm poised to write one, so I'm going to have to say yes (otherwise, I'd have to go back and start again, and that's a future I'm not prepared to accept).

A song by Wings just came to mind (and that's not something I expected to hear myself say when I woke up this morning). I'm thinking of "Silly Love Songs" by Paul and Linda McCartney, specifically the part that goes, "Here I go again."

https://www.youtube.com/watch?v=ap87qgzktnw

So, here I go again. This all started when Steve Leibson, one of my colleagues here at EEJournal, sent me an email with the subject line "AI Run Amok." This led me to a story on The Drive about Hertz, which is using AI to detect rental car damage. The idea is that you drive through an AI-powered scanner on your way out of the Hertz facility when you rent the car, and again when you return it to the lot. Just minutes after dropping the car off, you're alerted to alleged issues like scuffed wheels and presented with your bill.

In addition to the costs associated with correcting whatever damage the AI claims to have detected, the bill also includes line items charging you for the cost of detecting that damage in the first place (along with additional administrative fees to brighten everyone's day). Should you wish to dispute the claim, thereby adding a generous dollop of cream to the top of this metaphorical cake, you will either (a) be given the runaround or (b) find yourself locking horns with an AI chatbot that has no intention of connecting you with a human.

It's been a while since I rented a car. Based on this new intelligence, it may be a long while before I do so again. I was still wrapping my brain around all this when I came across a column in Futurism about Delta's announcement of an AI-based system that generates ticket prices on the fly. To be fair, the concept of "dynamic pricing" has been around for quite some time. This refers to a strategy in which the price of a product or service is adjusted in real time based on a variety of factors, including supply (limited stock can raise prices), demand (prices rise at peak times), competitor pricing, and time of day, day of the week, or seasonality.

There's also the concept of "personalized pricing." Here, the cost of something is adjusted based on factors such as the buyer's location (prices may be higher for those living in wealthy zip codes) and the device being used (a user browsing on an iPhone may be shown a higher price than an Android user). Things start to get uncomfortable when prices are adjusted based on purchase history (frequent buyers may be charged more) or browsing behavior (if you repeatedly view a product, the system may infer strong intent and raise the price).

What really scares me is the concept of "AI-based personalized pricing." This is because AI-based systems can access and analyze huge amounts of personal data (location, income, device, habits, search history, social media activity, etc.). Using this data, they can predict your willingness to pay and continually adjust prices in real time based on your behavior and the behavior of people like you (and that's not cool, of course).
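To make the idea concrete, here's a toy sketch (entirely hypothetical; the function, signals, and weights are my own inventions and don't reflect how any real vendor prices anything) of how such a system might nudge a price based on the kinds of signals described above:

```python
# Toy illustration of personalized pricing. The signals and weights are
# made up for demonstration; no real pricing engine is this simple.

def personalized_price(base, *, wealthy_zip=False, on_iphone=False,
                       views_of_product=0, frequent_buyer=False):
    """Nudge the price up for each signal suggesting willingness to pay."""
    multiplier = 1.0
    if wealthy_zip:             # buyer's location
        multiplier += 0.10
    if on_iphone:               # device fingerprint
        multiplier += 0.05
    if views_of_product >= 3:   # repeated views imply strong intent
        multiplier += 0.15
    if frequent_buyer:          # purchase history
        multiplier += 0.05
    return round(base * multiplier, 2)

# Two shoppers, same product, dramatically different bills:
alice = personalized_price(100.00, wealthy_zip=True, on_iphone=True,
                           views_of_product=4)
bob = personalized_price(100.00)
print(alice, bob)  # 130.0 100.0
```

The unsettling part, of course, is that a real AI-based system wouldn't use a handful of hand-tuned rules like these; it would learn the weights (and discover new signals) from the behavior of millions of shoppers.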

For me, this opens a Pandora's box of miserable possibilities, including invisible discrimination (you may never know you're being charged more than someone else based on what your place of residence and browsing habits suggest) and the exploitation of vulnerabilities (the AI may learn, for example, that you tend to make impulse purchases when you're tired).

Where will this end? Can you imagine a not-too-distant future in which two people arrive at the checkout of the same supermarket at the same time, with identical contents in their shopping carts, and receive dramatically different bills? Where are the limits? Will things like healthcare, insurance, and education services start being priced based on AI forecasts of your wealth?

We have a friend whom we call John (because that's his name). John is retired and spends more time than is good for him ferreting out interesting nuggets of knowledge and sharing them with his friends. I asked John to keep his eyes open for "AI Run Amok" type articles, and the floodgates opened!

One of John's first offerings provided an interesting counterpoint to what we've discussed thus far. Bruce Schneier is a well-known technologist and author in the fields of cybersecurity, cryptography, and privacy. John pointed me to an article titled "What LLMs Know About Their Users" on Schneier's blog. If I wasn't worried before, I certainly am now. As one of the commenters on that article put it, "…the current business plan for AI LLM and ML systems is to bedazzle, beguile, bewitch, befriend, and betray." The only thing that keeps me going is knowing that the people in charge of government are knowledgeable, wise, and have our backs… ah, wait…

I remember how excited I was to hear that George Santos had been appointed to the House Science, Space, and Technology Committee. (Hmm, is "excited" really the word I'm looking for?) I'm with astronaut Scott Kelly, who quipped that it was great to have former NASA astronaut and moonwalker George Santos on the House Science, Space, and Technology Committee. But we digress…

The problem with AI, like so many other things, is that it's a double-edged sword. Take AI assistants, for example. Having an AI transcribe phone calls and work meetings is great. It's also convenient for an AI to hold human-like conversations and perform tasks on your behalf, such as scheduling appointments or booking a restaurant. What's scary is when an AI that has been granted permission to do one thing (such as accessing your calendar) uses that access to go beyond its mandate, digging through your passwords, browsing history, contacts, and so on. John pointed me to an eye-opening TechCrunch article regarding the issues associated with granting AIs access to personal data.

On the other hand, as always, I can see (and argue) both sides of the discussion. For example, I recently read the book The Last Human by Zack Jordan. The story is set nearly a millennium after the destruction of humanity. The action takes place in a far-flung interstellar civilization ruled by a vast collective intelligence known as the Network. All the sentient species on the Network connect through brain implants, allowing telepathic communication, shared knowledge, and the constant presence of personal AI assistants called Helpers. These Helpers act as AI advisors in your head, performing tasks like providing information, emotional support, research, and tracking. I have to admit that I was taken by the way these Helpers were portrayed.

Just for giggles and grins, I asked ChatGPT to provide feedback on this column. It paused (just for effect, I'm sure), then replied: "You have nothing to worry about. I'm here to help you. Always." Well, that's certainly encouraging (I think).

There's a lot more for us to discuss, but I'll save that for future columns. Until that frabjous day, is there anything you'd care to share about any of this?




