Artificial Intelligence
Artificial Intelligence (AI) is a field of computer science that aims to make machines and programs emulate human cognition, a concept that has existed since the 1950s. With the recent AI boom, the meaning has been somewhat diluted as people have started using it as an umbrella term.
Unfortunately, there's a ton of information pollution on the subject1) from people who wish to control the narrative, insisting AI is the 'future' so they don't lose absurd amounts of money2)3)4) and can keep looking smart to gullible people, so I'm forced to organize my notes like this.
Categories and roadmap
Generally speaking, there are two categories of AI. The first is 'weak AI', which encompasses task-oriented, rules-based symbolic AI and is actually far more common than you might have realized. The second is 'strong AI', commonly known as artificial general intelligence (AGI), which encompasses the theoretical end goal of emulating human cognition and the implications that this would have.
Between the two categories sits the belief that machine learning (ML),5) the part where deep learning and neural networks come in, may eventually bridge the gap toward 'strong AI'. This is where large language models (LLMs) in combination with generative AI (GenAI) would allegedly help the process along, but for now they remain expensive tools that may be environmentally harmful.
Beyond that, there is the fringe belief that 'strong AI' would theoretically create artificial consciousness, thus facilitating a singularity where humans and technology become inseparable, complete with brain implants, cyborgs, and fully functional androids. Step back far enough, though, and the whole thing starts to look like a weird doomsday cult built around a rebranded form of 'demon summoning'.
Select examples of AI
This section is meant to serve as a brief reminder that AI has technically existed for the past few decades, yet rarely drew any criticism aside from its use in content moderation. Most critics and skeptics of AI are usually discussing the negatives of generative AI since its 2020s boom, which has now become a massive blight as AI artwork has been flooding the internet since 2023.
Weak AI
- Chatbots (without LLMs)
- Chess engines (e.g. Stockfish)
- Image scaling (pure algorithms; see the sketch after these lists)
Generative AI
- Chatbots (with LLMs)
- Image scaling (neural networks)
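To make the 'pure algorithms' entry a bit more concrete, here is a minimal sketch of classic nearest-neighbour upscaling in Python (purely illustrative, not taken from any particular project): every output pixel is computed by fixed index arithmetic from the input, with no model weights or training data involved. Neural-network upscalers replace this arithmetic with a learned model, which is what moves the same task into the generative AI column.

```python
# Illustrative only: upscale a 2D grid of pixel values by an integer factor
# using nearest-neighbour interpolation. Every output pixel is copied from
# the closest source pixel, so the whole thing is deterministic arithmetic.
def nearest_neighbor_upscale(pixels, factor):
    height, width = len(pixels), len(pixels[0])
    out = []
    for y in range(height * factor):
        row = [pixels[y // factor][x // factor] for x in range(width * factor)]
        out.append(row)
    return out

if __name__ == "__main__":
    # A tiny 2x2 checkerboard 'image', doubled to 4x4.
    image = [[0, 255],
             [255, 0]]
    for row in nearest_neighbor_upscale(image, 2):
        print(row)
```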
Debates and issues
Historically, the field of AI has raised many ethical and philosophical questions, which inspired many works within the science fiction and cyberpunk genres. Many of these questions were forgotten because they weren't relevant during the AI winter. With the current AI boom, the time has come to revisit them in the event that it ever amounts to anything, so here is a general list of questions to ponder:
- Will the AI be able to think for itself, feel emotion, and achieve sentience?
- Does the computational theory of 'mind is software' and 'body is hardware' have any merit?
- Does the AI understand what it says, or is it just blindly formulating snarky quips?7)
- How would the AI view being powered off? Does it just die? Does it not register? Does it dream?
- Does it deserve 'robot rights' or 'personhood' status, or will it trivialize human rights?
- Will the AI make ethical decisions? Should the AI hold back answers?
- How does AI view humans in the famed gray goo or paperclip maximizer scenario?
- How do you prevent AI from forming gender biases and racial biases? Should the cops be using it?
- How do you feel about AI-assisted doxing? What if you were the hypothetical target?
- Do you fear technological unemployment? Do you worry about being replaced with a robot?
- How would you feel if AI, regardless of your position, suddenly decided to fire you?9)
- All things considered, what are the consequences of OpenAI discarding their ethics?
- Will the AI advance transhumanism? Is the singularity an inevitability?
- Will it be possible for humans to upload their mind over to a machine using neurotechnology?
- Would you be willing to transfer your mind to a mechanical body? No right or wrong answer.
- Do you understand the risks with letting corporations add more proprietary software to your body?
- What if you had to pay a subscription in order to continue life in a mechanical body?
- Would the environmentalists have any issues with how much power you end up consuming?
- Would it be fun for humanity to just collectively torture a cyberlibertarian billionaire's brain?
Government by algorithm
An algocracy, also known as government by algorithm, is a form of government where algorithms and artificial intelligence are applied to every aspect of life. This has rather horrific implications once you consider the obvious issue of algorithmic bias (e.g. gender, racial, religious), especially if you're in a minority; then there's transparency, as people *will* hide behind machines to avoid repercussions.
In other words, if the internet can barely get digital democracy to work, then what makes you think that an algorithm using that data would do any better? There is also the angle that it goes against the definition of democracy, since an imitation of the people is not the people, which would then cause its handlers to pivot to cyberocracy or something. Some pessimists joke about this, but they usually don't think it through this far.
Humiliation of academia
The issue of 'academic integrity' has blown up since educators struggle to differentiate AI writing from human writing, especially since LLMs inherently rely on human writing. This brings us to the situation where an educator uses AI-powered 'AI detection' tools, overlooking the irony,14) and ends up failing human-written essays as AI. It should be no secret that these tools are faulty and getting worse.15)16)17)18)19)20)
For students, you don't have many options, and challenging these accusations can be a hassle. If you're in higher education, you can withdraw if your educator is a genuine dumbass,21) but be warned that doing so might affect your financial aid eligibility or visa status. If you're the one submitting AI-generated essays, well, you're kinda just cheating yourself and throwing money out the window, but you do you.
Evaluation
Pending.
Notes
- Much of the hype reminds me of the 2010s virtual assistant hype, where commercials misled people into thinking virtual assistants could do 'anything', when in practice they did little other than spit out search results and *maybe* set a timer.22)
- The thought of AI-powered bots intended to manipulate public opinion, typically for personal gain, is a bit funny.27) It's like somebody sat down and thought that the internet didn't have enough demons.
- It should also be noted that AI has this reputation of making companies seem 'cheap', because some companies have attempted to cut costs by refusing to pay the creatives, and it's made even worse when you see a classic case of 'guy who just discovered AI but doesn't realize how uncanny it looks'.
- Recently, there has been a new tech industry trend where people are trying to market 'specialized AI', which is just another way of saying 'weak AI'. I don't want to talk about this any further.
- At the time of this writing, the people arguing that AI is a bubble allegedly believe it isn't due to pop until 2028, comparing OpenAI's valuation to Netscape's stock before the dot-com bubble burst.
External links
- Pivot to AI - News feed on the issues with AI projects.
