Sources of existential risk

Author: Tejretics

Posts

Total: 31
Tejretics
Debates: 7
Posts: 462
I do not understand how y’all have necroposted a thread from four years ago back into existence.
Lemming
Debates: 2
Posts: 2,256
--> @Tejretics
Sometimes I think a category has gone too many days without activity,
So I read some old threads,
And find my mind sparked at times, of opinions, curiosity.
WyIted
Debates: 3
Posts: 659
--> @Tejretics
It's certainly higher than 1%. The entire reason for the Fermi paradox is that there are some very dangerous things civilizations face.

AI is an existential threat, and not just for the obvious reason that it could replace us. Google the grey goo hypothesis for just one example. Nuclear threats exist, and so does global warming. The sun could literally explode and kill us at any second. For the most part, the biggest risks come from ourselves.

Outside of ourselves, AI, and natural occurrences like killer asteroids, there is the possibility that the universe is quiet because of something like a roaming civilization that seeks out and destroys any civilization just before, or at the moment, it reaches a technological singularity.

Researchers are constantly trying to reach out to alien civilizations on the off chance a signal hits one, but they aren't considering that there could be a very good reason other civilizations have STFU. That reason could be a serious, planet-destroying existential threat.

The closer we get to a technological singularity, the closer we approach a planet-destroying event.

One answer to the Fermi paradox is that we are in a simulated universe, which could mean that if we approach a technological singularity, we might use so much of the simulation's computer resources that we kill ourselves.

The threat to the existence of intelligent life on this planet is very severe, very real, and very likely. We should proceed cautiously.
K_Michael
Debates: 30
Posts: 556
Commonly posed existential risks applicable to the next 200 years

1. AI/Singularity
2. Climate change
3. Nuclear apocalypse
4. Outside risks (e.g., aliens)
5. Disease, especially engineered

It's up for debate how much we can do about 2.
1 and 3 are easy in theory: don't build AIs or nukes. Unfortunately, international conflicts incentivize defection because of the advantages afforded by developing them (sketched below).
4 is similar to 1. By the time we realize it's a problem, there will likely be nothing we can do about it.
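A minimal illustration of that defection incentive as a two-player payoff matrix. The numbers are hypothetical, chosen only to show the standard arms-race dynamic, and are not from this thread:

```python
# Hypothetical payoffs for two rival states deciding whether to pursue a
# dangerous technology (AI or nukes). Numbers are illustrative only.
payoffs = {
    # (A's choice, B's choice): (A's payoff, B's payoff)
    ("restrain", "restrain"): (3, 3),  # mutual restraint: safest joint outcome
    ("restrain", "develop"):  (0, 4),  # A falls behind strategically
    ("develop",  "restrain"): (4, 0),  # A gains an edge
    ("develop",  "develop"):  (1, 1),  # arms race: worse than mutual restraint
}

def best_response(opponent_choice):
    """A's payoff-maximizing choice, given B's choice is fixed."""
    return max(("restrain", "develop"),
               key=lambda mine: payoffs[(mine, opponent_choice)][0])

# Whatever the rival does, "develop" pays more for you, so both sides
# defect even though both would prefer mutual restraint.
for b in ("restrain", "develop"):
    print(f"If the rival chooses {b}: best response is {best_response(b)}")
```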

Other risks:
nanobots: essentially the grey goo hypothesis mentioned by Wylted, but it doesn't really require any intelligence behind it, so I count this as separate from AI, since it could happen independently.
other outside risks:
* type 1: ones we can potentially manage, such as a meteor on a crash course
* type 2: ones we can't, e.g., nearby supernovas

ebuc
Debates: 0
Posts: 2,822
--> @K_Michael
Volcanoes, 5 kinds in varying degrees of severity, and

.........differing in the kinds of gaseous substances they release and other particles that block out the sun
K_Michael
Debates: 30
Posts: 556
--> @ebuc
While volcanoes, and especially "super-eruptions", can have drastic effects on global weather patterns, I don't think they are severe enough to pose an existential risk, one that leaves little-to-no chance of human survival. Mass starvation as crop yields drop, harsh winters, a corresponding hit to the economy, yes, but not extinction.