Humanity will invariably reach a state of utopia in at most 200 years

  • You can't sustain that economic growth if your populace shrinks and becomes stupid. IQ is falling in most developed nations; the Flynn effect has been dead for ages in the West. Flynn himself published research showing falling IQ levels in first-world countries. And basically no nation that becomes first world can sustain even a replacement-level birth rate. What good is it if third-world countries get better if they throttle once they hit a certain stage?

    Self-sufficient full automation is likely completely impossible, dude. The data doesn't point in the direction that full automation is possible, or even that a single thing has accomplished it; it points in the direction of technological gains that have massively accelerated in the modern world. The Amazon article you showed in absolutely no way demonstrates that those robot janitors are a thing. It straight up says in the article that "patents and patent applications don't guarantee the idea will become reality". And the robotic gut from the other article you shared is not self-sufficient and self-sustaining. It can't create more of itself. It can't repair itself. Hell, it can't even find its own food source; it has to be provided with one. It doesn't even have programming remotely concerned with the basic materials needed to fix itself.

    If you don't force everyone to live in a virtual world full time, you'll have people trying to destroy it, to name just one factor. All they need to do is hack into a single section of AI and it's all over. How are you going to have a self-sufficient piece of technology when everything goes to shit because they've shut off some single piece of AI that's required to keep the whole system functioning, and your human populace has been catacombed in some AI world for the last 50 years without the knowledge to deal with these things? Or when a single piece of code goes wrong, things stop working, which leads to more things stopping, and a populace without even the knowledge to make a piece of wire is pushed back into the real world?

    How are you going to deal with climate issues you never knew about at the time? How are you going to fight off countries that don't want to live in a virtual world and are actively building arms and weapons to take out the machines, or terrorists? All they need to do is find a single weak spot in the programming and defence systems to exploit. How are you going to keep humans around as a species to enjoy this future if they're so wrapped up in this brilliant virtual world that they don't care about the real one? And if you breed them artificially, how are you going to programme into the algorithm the evolution that would happen to those humans, when evolution is literally caused by random mutation?

    There's not a single piece of technology ever made that's self-sufficient and fully automated, and we're not even close to creating one.

    • Crap, I didn't know about that, but it seems like you're right. One hypothesis is that high levels of immigration from less developed countries may have been a factor, but it seems like the reverse Flynn effect is real. Increased pollution and less fresh air due to increased computer use and gaming are another possible factor. Still, while this isn't great, it certainly won't impact the rate of technological innovation that harshly, given 1) that the decline is nowhere near as sharp as the increase seen during the Flynn-effect era and 2) that most innovation is produced by a very small group of individuals at the very end of the curve, where most of the variation is the result of randomness, so any changes caused by recent and likely future declines in IQ fall well within the scope of noise.

      Third world countries getting better is better than them not developing at all, even if they eventually throttle. First-world nations will only stop being first-world because they will be economically and technologically surpassed by other nations. Sure, we may lose loads of potential innovators from these countries, but we will gain loads more from the newly-developed nations. So, in my opinion, nothing to worry about.

      If it's full-on completely impossible, what is the cut-off point? At what point will technological progress, which has followed an exponential trajectory since the dawn of mankind, suddenly halt? Is there some fundamental law of physics that says "full automation is impossible"? I'd like to hear your response on this one. People like to label everything as "impossible" these days, and such people usually get proven wrong within a couple of decades: aeroplanes, space rockets, commercial personal computers, etc. I don't and probably won't ever understand the rationale behind calling things "impossible" when there are clear tendencies towards them and they are clearly physically possible. I understand "not feasible in the near future" or, at worst, even "something humanity won't ever reach" for some apocalyptic reason, but "impossible"? I don't think I will ever get that.

      Okay, perhaps the article I sent you was out of date, but the point is that many companies already rely on technologies like the one in the link (e.g. https://www.nanalyze.com/2018/12/robot-janitors-commercial-floor-cleaning/). Yes, I am aware that these aren't perfect janitors in that they are incapable of, say, picking up larger objects, but, as I have already explained, that is all coming in the near future.

      The gut robot is actually fully self-sustaining: if it were released into a sewer, it would be able to survive all by itself. As for self-repair, that is far too advanced a process to be feasible today. There is a reason mechanical engineering is one of the most in-demand professions right now, and mechanical engineers will probably be some of the last people to lose their jobs. But if we are talking about self-sustainability and self-sufficiency, we already have that.

      You see, when you are dealing with AI more intelligent than humans and specialised exclusively in one field - security - it becomes next to impossible for a human to hack anything guarded by it. Something like that may be a problem in the early days of consciousness simulation, but as soon as AI becomes better than humans at security, there is no chance. Things not working is a more serious problem, but I think that, as long as a large enough number of AI programmes written using different methods are watching over potential errors in the code, we should most likely be fine. If one AI doesn't see the error, or two, or a thousand, one out of a million probably will (recall that, by this point, AI will be many orders of magnitude more skilled at writing code - and at detecting imperfections - than humans).
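      A back-of-the-envelope sketch of that redundancy argument, assuming (optimistically) that independently written checkers fail independently - in practice, diverse programs often share blind spots:

```python
# Sketch of the redundancy argument: n independently written checker
# programs each catch a given bug with probability p. Under the
# (strong) assumption that their failures are independent, the chance
# that every single checker misses the bug shrinks geometrically.

def miss_probability(p: float, n: int) -> float:
    """Probability that all n independent checkers miss the bug."""
    return (1.0 - p) ** n

# Even checkers that each catch only 5% of bugs become reliable in bulk:
for n in (1, 10, 100, 1000):
    print(f"n={n:5d}  P(all checkers miss) = {miss_probability(0.05, n):.6f}")
```

      With a million diverse checkers, the probability that all of them miss is astronomically small for any per-checker catch rate meaningfully above zero - which is the intuition behind "one out of a million probably will". The weak link is the independence assumption, not the arithmetic.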

      As I said, politics will become extinct. "Country" will become synonymous with "culture". There won't be presidents or armies - there simply won't be a need for them (I already explained this bit). Also, if people don't trust this virtual world, no one is forcing them to go inside. Terrorists are a bigger issue; honestly, I don't see a perfect solution at the moment, but things like that tend to solve themselves naturally. I reckon the same kinds of questions were being asked when the first nuclear bomb was being developed, and yet here we are, 80 years later, still alive and well. I don't know what kinds of technologies will exist in 200 years' time, which planets humanity will have populated, or what the general state of affairs will be, but seemingly serious concerns existed for every revolutionary technology, and basically all of them were resolved once the technology was implemented. Still, it's a valid concern.

      Finally, humans won't need to care about the "real world". The virtual world will be their "real world". And if you are referring to the biological species Homo sapiens, I don't see why it's important to preserve it. If you want to think of it that way, they will have evolved into a different, non-biological species. I don't see anything wrong with that. Chances are, human curiosity will take over, and many will still want to explore the depths of "real-world" space. Whether they do it in their natural, biological form or some other form isn't, in my opinion, an important question.
