Nameless Rumia's Wiki


Artificial Intelligence

Artificial Intelligence (AI) is a field of computer science that aims to make machines and programs emulate human cognition, a concept that has existed since the 1950s. With the recent AI boom, the meaning has been diluted, becoming both a marketing buzzword and an umbrella term.

Due to the immense amount of AI washing and information pollution on the subject, which distracts gullible investors from the looming threat of an AI bubble1)2)3) and feeds the general 'growth at all costs' mindset, I am forced to organize my horrible notes like this.

Categories and roadmap

Generally speaking, there are two categories of AI. The first category is 'weak AI' or 'specialized AI', which encompasses the task-oriented, rules-based symbolic AI systems that are more common than you'd think. The second category is 'strong AI', commonly known as artificial general intelligence (AGI), which encompasses the theoretical end goal of emulating human cognition and the implications that this would have.
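If it helps ground the jargon, 'weak AI' in practice can be as mundane as a pile of hand-written rules. Here's a toy sketch of a rules-based symbolic system; the rules, labels, and function names are all invented for illustration, not taken from any real product:

```python
# A toy rules-based 'weak AI': a hand-written symbolic classifier.
# Real expert systems chain hundreds of such rules, but the principle
# is the same: no learning, just rules an engineer typed in.

RULES = [
    (lambda t: "refund" in t or "chargeback" in t, "billing"),
    (lambda t: "password" in t or "login" in t, "account"),
    (lambda t: "crash" in t or "error" in t, "technical"),
]

def route_ticket(text: str) -> str:
    """Route a support ticket by matching keyword rules in order."""
    t = text.lower()
    for condition, label in RULES:
        if condition(t):
            return label
    return "general"  # fall through when no rule fires
```

Nothing here 'thinks', yet this kind of thing has shipped under the AI label for decades, which is exactly why the umbrella term is so slippery.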

Between the categories, you have the belief that machine learning (ML),4) the part where deep learning and neural networks come in, may eventually bridge the gap toward 'strong AI'. This is where a large language model (LLM) in combination with generative AI (GenAI) would allegedly help with the process, but it's currently just expensive e-waste and a source of debt.

Afterwards, there is the fringe belief that 'strong AI' would theoretically create artificial consciousness, thus facilitating a singularity where humans and technology become inseparable, with their brain implants, cyborgs, and fully functional androids. On the other hand, the entire thing can read like a weird doomsday cult peddling a rebranded form of 'demon summoning'.

Select examples of AI

This section is meant to serve as a brief reminder that AI has 'technically existed' for decades, but rarely drew criticism unless it was about content moderation. Since the 2020s boom, most critics and skeptics of AI have started tearing into 'generative AI', as it has become a massive blight since 2023, but it's important to be able to discern the two so your overall judgement can be more sound.

Debates and issues

Historically, the field of AI has raised many ethical and philosophical questions, which inspired many works within the science fiction and cyberpunk genres. Many of these questions were forgotten, as they weren't relevant during the AI winter. With the current AI boom, the time has come to revisit these questions in the event that it ever amounts to anything, so here is a general list of questions to ponder:

  1. Should the AI ever think, feel, and achieve sentience? What are the implications?
    1. Does the computational theory of 'mind is software' and 'body is hardware' have any merit?
    2. How would the AI view being powered off? Does it just die, not register, or dream?
    3. Do you believe in 'robot rights', or would the proposal further trivialize or undermine human rights?
  2. Should the AI ever hallucinate or lie? Should the AI potentially get people killed?
    1. How would you know if the AI is only saying what it thinks you want to hear?
      1. Consider how the gradual conditioning could potentially lead to a long-term response bias.
      2. Consider how its 'friendly' tone could fuel AI psychosis, which has killed people.
    2. How does AI solve the trolley problem? Whose lives do self-driving cars value and why?6)
    3. How do you feel about AI-assisted doxing? Consider law enforcement and vigilantism, since both use cases rely on the same capability, as well as the non-zero chance it can be used against you.
    4. Do you fear technological unemployment? Consider a scenario where AI allegedly suggests firing you, then consider a scenario where AI keeps rejecting your job applications.7)8)9)10)
    5. In the famed 'gray goo' or 'paperclip maximizer' scenarios, how would you ensure that the AI doesn't start dehumanizing humans for being 'in the way' and dissect humans for parts?
  3. Could the AI advance transhumanism? Is the singularity an inevitability?
    1. Do you believe in neurotechnology (e.g. brain implants)? Do you understand the risks with letting companies add proprietary software to your body, knowing the company could go under?11)
    2. Do you believe that humans will be able to 'upload their mind', then move that to a machine?
      1. How would you know if the end result is 'accurate' or 'human-like', when it could end up being a mere imitation of a human that has been flanderized into a flat character?12)
      2. How much power would being a robot even consume? Would this require a subscription?
        1. Would they place you back into the capitalist machine, making you work in the digital afterlife?
        2. Do you suppose that environmentalists and 'robot rights' advocates will clash over that?
      3. What consequences would deathbots have on the whole grieving process?13)14)15)
    3. Wouldn't it be fun if humanity could just collectively torture some cyberlibertarian billionaire's brain?

Government by algorithm

An algocracy, also known as government by algorithm, is a form of government where algorithms and artificial intelligence are applied to every aspect of life. This has rather horrific implications if you consider the obvious issue of algorithmic bias (e.g. gender, racial, religious), especially if you're in a minority, as well as the issue of transparency, since people *will* hide behind machines to avoid repercussions.
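The bias problem is easy to demonstrate in a few lines: a scoring rule can discriminate without ever seeing a protected attribute, by leaning on a correlated proxy instead. Everything below (field names, thresholds, zip codes) is invented purely for illustration:

```python
# Toy illustration of algorithmic bias via a proxy variable.
# The rule never takes race, gender, or religion as input, yet in a
# segregated city a zip-code penalty discriminates all the same.

def loan_score(income: int, zip_code: str) -> int:
    """Score an applicant from income, with a zip-code adjustment."""
    score = income // 1000
    # The model 'learned' that these zip codes correlate with defaults;
    # in practice the zip code is standing in for race and class.
    if zip_code.startswith("60"):
        score -= 50
    return score
```

Two applicants with identical incomes get very different scores, and the handlers can shrug and say 'the algorithm decided', which is the transparency problem in miniature.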

In other words, if the internet can barely get digital democracy to work, then what makes you think that an algorithm using that data would do any better? There is also the angle that it goes against the definition of democracy, since an imitation of the people is not the people, which would then cause its handlers to pivot to cyberocracy or something. Pessimists may joke, but they usually don't think this far ahead.

Humiliation of academia

The issue of 'academic integrity' has blown up as educators struggle to differentiate AI and human writing, especially since LLMs inherently rely on human writing, which brings us to the situation where an educator utilizes an AI-powered 'AI detection' tool, overlooking the irony,16) and ends up failing human-written essays as AI. It should be no secret that these tools are faulty and getting worse.17)18)19)20)21)22)

For students, you don't have many options, and challenging these accusations can be a hassle. For students attending higher education, you can withdraw if your educator is a genuine dumbass,23) but be warned that doing so might affect your financial aid eligibility or visas. For students who are submitting AI-generated essays, well, you're kinda just cheating yourself and throwing money out the window, but you do you.

Evaluation

TL;DR: Some positives, vast negatives. Note the AI pushback.24)25)26)27)

Notes

  • In a way, the AI hype reminds me of the virtual assistant hype of the 2010s, where the marketing convinced people that virtual assistants could do literally 'anything'. In reality, these were only good for spitting out search results and setting timers, until you have to say 'five zero minute timer'.
  • Generative AI chatbots are bad at complicated math problems,28)29) assuming that the question isn't rehashed from some overpriced textbook,30) and they're known to be bad at chess.31)
  • The thought of AI-powered bots intended to manipulate public opinion, typically for personal gains, is a bit funny.32) It's like somebody sat down and thought that the internet didn't have enough demons.
  • It should also be noted that AI has this reputation of making companies seem 'cheap', because some companies have attempted to cut costs by refusing to pay the creatives, and it's made even worse when you see a classic case of 'guy who just discovered AI but doesn't realize how uncanny it looks'.
  • At the time of this writing, the people arguing against the AI bubble allegedly believe that it isn't due until 2028, comparing ChatGPT's stock to Netscape's stock before the dot-com bubble burst.
  • If it helps break the illusion, it may be worth mentioning that AI is not 'autonomous' or 'self-sufficient' as it requires humans to function, especially when the GPT changes versions every other month.
  • Pivot to AI - News feed on the issues with AI projects.
1)
"Just How Bad Would an AI Bubble Be?" (September 7, 2025). The Atlantic.
2)
"America is now one big bet on AI" (October 6, 2025). Financial Times.
4)
Not to be confused with Marxism–Leninism or any other ML acronym for that matter.
5)
If you have a 'beginner-level' interest in self-driving cars, I would suggest reading about the advanced driver-assistance system (ADAS) and the SAE J3016 standard to see what automobile manufacturers have been doing with their own self-driving cars. In other words, anything but a Tesla, for fuck's sake.
12)
There was a short-lived trend of AI chatbots that pretend to be famous celebrities or attempt to roleplay as completely random people based on their social profiles, but I couldn't help but notice that the end result tends to be a flanderization or a mockery of the person in question.
16)
What is the logic of using an AI-powered 'AI detection' tool to detect AI writing, but refusing to question, at any point, whether the AI is giving you what it *thinks* you want to hear? This will only place more stress on students who genuinely did the work, while encouraging students to fall for those 'anti-AI detection' scams.
21)
"The case against AI detectors" (September 30, 2024). The University of Iowa.
23)
I've had the displeasure of dealing with an 'educator' who 'teaches' an online class, yet has no office hours, never replies to emails, and hands out grades long after the fact, while the course hands out five assignments per week, *when other courses ask for far less*, and it's clearly modeled after an old version of an overpriced academic textbook. Just remember Rate My Professors and opt to pick courses yourself.
29)
"Why is ChatGPT so bad at math?" (October 2, 2024). TechCrunch.
30)
"A.I. Can Write Poetry, but It Struggles With Math" (July 23, 2024). The New York Times.
artificial_intelligence.txt · Last modified: by namelessrumia