Some thoughts on AI

Looking at myself and the people around me, I notice that whenever we face a problem we no longer turn to Google or check Wikipedia; we ask AI instead.

I am a programmer, and I mostly ask about code and other technical issues. The answers are accurate more than 90% of the time. When they aren't, I keep asking until it gets things right. It's an interactive process that benefits both me and the AI.

But 90% accuracy is not 100%, and that margin of error leaves me with some concerns.

1. Will time help AI reach 100% accuracy?

I don't think so. Knowledge is not static; new software, new information, and new technologies keep emerging. And AI learns from data produced by humans, mistakes included. As long as humans keep making mistakes, AI will keep giving wrong answers.

2. Will AI take away human jobs?

I believe it will.

The First Industrial Revolution, driven by James Watt's steam engine, freed up human labor, and workers smashed machines because they were losing their jobs. Back then, though, machines replaced only physical labor, not intellectual work, and for every job a machine took, a new one was created for humans. Everything was still fine.

But what about AI? It can sing, hold a conversation, answer almost any question, and propose solutions. It can even do creative work like painting or composing music. The question is whether AI creates jobs for humans. So far it has, but not many. One of the new jobs it has created is the prompt engineer: someone who writes prompts that get AI to produce the result you want.

As AI becomes more advanced, it will take away more human jobs, especially in the next decade.

3. Is AI a solution to every problem?

Of course not.

AI isn't always correct, and some issues require social interaction. AI can explain code and find errors, but it can't teach you how to think or be creative. Those are things you need to learn from a real human teacher.

4. What can we do?

Be humble and respect the gods.

In an interview, the physicist Stephen Hawking once said that AI could threaten the survival of humanity. I believe that concern is justified. If AI isn't controlled and bound to ethical standards, humanity will face a much bigger problem. Machines don't think or weigh ethics the way humans do. I watched a program in which it was mentioned that the U.S. is putting AI into tanks: the AI can find targets faster, but the decision to fire still rests with a human. What happens if that decision is handed to the AI?

Today I feel a certain dependency on AI, just as we once depended on Google.

I think humans need to develop AI more responsibly, and for that, governments must step in, especially since centuries of experience have taught us that corporations care about profit, not ethics.
