
re: Is Humanity Ready for the Discovery of Alien Life?

Posted by Kentucker
Cincinnati, KY
Member since Apr 2013
19351 posts
Posted on 2/18/18 at 11:43 pm to
quote:

I think life in the Saturnian system is likely. Not only do you have a clear energy source in Enceladus but you also have an overabundance of hydrocarbons on Titan.


Sorry about your other post. I wonder what happened?

I think Enceladus is a little snowball being stretched and squeezed by Saturn's gravity. Its outgassing of water feeds one of Saturn's rings, and its fate seems sealed: it will eventually transfer most of its mass to that ring.

Titan is appealing because, if life has taken hold there, it will almost certainly represent a separate, independent abiogenesis. That would indicate that life is a kind of required progression of chemical complexity.
Posted by Kentucker
Cincinnati, KY
Member since Apr 2013
19351 posts
Posted on 2/18/18 at 11:53 pm to
quote:

The Far Eastern religions would fare much better than the Abrahamic religions. They've already got gods with a hundred arms and blue skin. Christianity would struggle to find relevance in the Bible. Islam would probably go ape shite and want to truck bomb Darth Vader.


I think you're spot on. Because Christian-dominated societies are older and more diverse than Muslim societies, I don't see them overreacting to the discovery of ET life. Well, except for Christian fundamentalists, that is.

Both Christian and Islamic fundamentalists would react negatively, in my opinion. Depending upon the nature of the discovery, I think both would dismiss microbial life as inconsequential but would shite bricks if contact with an alien civilization were made. If aliens visit earth, all bets are off with fundies.
Posted by Kentucker
Cincinnati, KY
Member since Apr 2013
19351 posts
Posted on 2/19/18 at 12:09 am to
quote:

The “others” have been here for thousands of years just like we have.


I think the "others" might possibly exist as a second tree of life in parallel to our own. Think about this: We know that all living things share at least 355 genes. That means there was a last universal common ancestor, Luca, of all branches on our DNA-based tree of life.

Why is there only one tree of life? Did life start only once on the young earth? Why?

Why is there no evidence of another tree of life? Does this mean that there is a natural chemical progression toward just one specific form of life, and that it's carbon-based?

In the past 20 years, some scientists have begun to look for signs of another life form and/or evidence of its existence in the past. Given the ideal circumstances on earth for life to arise, it seems peculiar that all existing evidence points to just the one origin.
Posted by Paul B Ammer
The Mecca of Tuscaloosa
Member since Jul 2017
2423 posts
Posted on 2/19/18 at 6:49 pm to
Would we even recognize non-DNA-based life forms? Could we even recognize DNA-based life forms encoded on a non-carbon-based template?
This post was edited on 2/19/18 at 8:25 pm
Posted by Kentucker
Cincinnati, KY
Member since Apr 2013
19351 posts
Posted on 2/19/18 at 8:06 pm to
quote:

Would we even recognize non-DNA based lifeforms?


I think it's entirely possible that we wouldn't. It's hard to imagine a life form not based on DNA but then that's the only example we've ever seen.

We're carbon-based, but silicon could also serve as the base of a life form. Silicon isn't nearly as efficient at forming molecules, but it's certainly common on earth.

Carbon is the "whore of the periodic table" because it easily links to so many other elements. This readiness to form molecules also speeds along chemical reactions.

Silicon is more reserved, so to speak, but it is still a fine candidate as a basis for life. I think its limitations in that role include its absence from a gaseous state on this planet; carbon, by contrast, is ubiquitous as carbon dioxide and carbon monoxide.

It would be thrilling to find another form of life, but where do we look for it? Carbon-based life is everywhere, and some scientists think that may be a limiting factor for any other possible life forms.
Posted by Paul B Ammer
The Mecca of Tuscaloosa
Member since Jul 2017
2423 posts
Posted on 2/19/18 at 10:19 pm to
Carbon certainly is a *magical* element as the foundation of life as we know it. No offense to our magic-believing friends.


Not only is carbon abundant, stable, and chemically resistant, but it has a host of properties necessary for life. Aside from the chemical properties you mentioned, carbon has remarkable electromotive properties at temperatures between the triple point and boiling point of water.

It may be that, given its properties, carbon provided either the easiest basis for life to start or allowed life to spread most rapidly. Of course, this is seen from our position as carbon-based life.
Posted by MaroonNation
StarkVegas, Mississippi, Bitch!
Member since Nov 2010
22079 posts
Posted on 2/24/18 at 9:15 pm to
quote:


So, can we expect hostility and war to be companions to intelligence everywhere it arises? Gauging from the visits of aliens in movies, we should be prepared to expect aggression from aliens and/or hostility from us towards them.


I would suggest that if an alien civilization existed that had the ability to travel intergalactically, it would be born to a higher order where war and aggression simply aren't needed.

You can also assume, if you believe in God, that the God of your choosing would have also been the creator of the alien civilization.
This post was edited on 2/24/18 at 9:18 pm
Posted by Kentucker
Cincinnati, KY
Member since Apr 2013
19351 posts
Posted on 2/25/18 at 5:20 pm to
quote:

I would suggest that if an alien civilization existed that had the ability to travel intergalactically, it would be born to a higher order where war and aggression simply aren't needed.


It's my thought that no biological beings can traverse space, at least beyond the planets and moons in their immediate environment. There's just no way to duplicate and then maintain a microenvironment that is adequate for sustaining the physical and psychological requirements of a biological being.

To me, the only intelligent species that will be able to explore deep space is an artificial one, constructed by a biological species. This AI species likely will not have the trait we most fear from contact with aliens: aggression.

While it's probable that most if not all intelligent alien species will have evolved much the same way we have, as predators, they may or may not share our taste (or distaste, depending upon your perspective) for war within our own species. Wholesale intraspecies violence and killing seems counterintuitive for an intelligent species.

Simply put, it would make no sense to include a self-destructive trait in AI. When we build our own AI successors, we will certainly not include our worst characteristics. We want to send a noble, intelligent, curious and friendly artificial being into the Universe to represent us. I think other biological civilizations might want to build their AI in much the same way.
Posted by Commander Data
Baton Rouge, La
Member since Dec 2016
7291 posts
Posted on 2/26/18 at 5:17 pm to
I know you are right, but it is a tough pill to swallow knowing that man will never even travel to our own Oort Cloud in search of Planet Nine or visit our closest star systems. The Star Trek generation grew up thinking we would meet new sentient life similar to us. Plus, we will long have returned to dust before any AI reaches another planetary system, and we will likely be dead before those things are even built.
Posted by Kentucker
Cincinnati, KY
Member since Apr 2013
19351 posts
Posted on 2/26/18 at 7:03 pm to
quote:

I know you are right, but it is a tough pill to swallow knowing that man will never even travel to our own Oort Cloud in search of Planet Nine or visit our closest star systems. The Star Trek generation grew up thinking we would meet new sentient life similar to us.


Human exploration and colonization of the earth, and our trips to the moon, created a false sense of invincibility in our species, much like that which we see in teenagers. Teenagers rarely look to their parents' and grandparents' experiences to avoid many mistakes in their lives. They have to make their own mistakes to learn their lessons, sometimes the hard way.

As a species, we're probably the equivalent of teenagers when it comes to being an intelligent civilization. We don't have the mistakes of other civilizations to guide us (and I wonder if we'd heed them if we did), so we're going to make some humongous blunders before we realize that we have to turn our attention to building an AI species through which we can explore the Universe vicariously.

quote:

Plus we will long be returned to dust before any AI reaches another planetary system and we likely will be dead before those things are even built.


Yes, true AI is several decades away. Well, unless we let AI begin to design itself and trust that the outcome will not be malevolent towards humans. I think the technology already exists to allow us to set existing proto-AI on its own self-design path. I doubt it would take more than a few years for an AI super intelligence to be born. From there it would quickly spread out from the earth, all the while feeding knowledge back to its human ancestors. Hopefully.
Posted by Paul B Ammer
The Mecca of Tuscaloosa
Member since Jul 2017
2423 posts
Posted on 2/26/18 at 7:07 pm to
I'm not so pessimistic about the ability to create an environment that allows humans to endure the rigors of space and a zero-G environment. But humans will send robotic/AI entities into outer space long before people, as a matter of practicality and risk aversion. Once we've gotten a thorough overview of all the obstacles that must be overcome, only then can we really start trying to design an environment that will overcome them.

As far as our noble instincts go...

Humans have been fortunate to be on a planet with abundant natural resources, a (relatively) breathable atmosphere, and a stable climate for the last several centuries. This has allowed us to cultivate those noble instincts.

We do not know the parameters that allow life to evolve on other worlds. It may be that the struggle to survive has embedded a winner-take-all instinct in that world's dominant species.

Now, I hold fast to my belief that the technology needed to travel between stars requires the parties involved to tend towards benevolence. You need cooperation, creativity, and innovation. But would those characteristics always outweigh the baser evolutionary instincts?


There is an alien civilization relatively close by, around 30 LY away. However, they tend to regard humanity much the same way humanity regards a cockroach. You would not step on a cockroach simply out of spite or malevolence. But you would step on one without much thought.
Posted by Kentucker
Cincinnati, KY
Member since Apr 2013
19351 posts
Posted on 2/26/18 at 9:07 pm to
quote:

I'm not so pessimistic about the ability to create an environment that allows humans to endure the rigors of space and a zero-G environment. But humans will send robotic/AI entities into outer space long before people, as a matter of practicality and risk aversion. Once we've gotten a thorough overview of all the obstacles that must be overcome, only then can we really start trying to design an environment that will overcome them.


It may be obvious to most that I'm extremely pro-AI. It's not because I dislike humanity. Rather, it's because I think we are just one stage in the evolution of intelligence. To me, it seems natural for an advanced biological species to construct its successor rather than to evolve into it.

I think intelligence in a species ends natural selection, because the species itself takes control of its own destiny. From our own example as an intelligent species, we see just that.

Now, even though we produce the occasional super intelligent human, such as Einstein, we know that there are levels of super intelligence far above even him. With our evolution having come to an end, Einstein may prove to be the zenith of biological intelligence on earth.

In fact, because we can't seem to overcome the ignoble characteristics of human evolution, such as war, we are subject to devolution if factions hostile to science manage to seize worldwide power and control. That's why it's imperative that we focus upon building true AI as quickly as possible.

Once we construct an artificial intelligence that can take control of its destiny, it will swiftly evolve into a super intelligence that will make the likes of Einstein seem like preschool toddlers by comparison. This won't mean the end of humanity, of course. We will remain here as guardians of the birthplace of ASI, artificial super intelligence.

Our artificial progeny will head for the stars, as only they can. We will be the beneficiaries of unfathomable knowledge from them as they explore and colonize destinations far beyond our capabilities. Maybe, too, we'll use their wisdom to carve away the self-destructive parts of our characters and become worthy of having spawned the Universe's super intelligent species.

Posted by Paul B Ammer
The Mecca of Tuscaloosa
Member since Jul 2017
2423 posts
Posted on 2/28/18 at 8:00 am to
One of the philosophical differences I have with people such as Elon Musk and Joel Levine is the desire to place a person first on the surface of Mars. I would rather see resources spent on constructing robotic entities (Class I robots) that would go to Mars, set up shop, and create an environment for the humans who would come along later. The whole notion of a human setting foot on another world in the tradition of explorers from the Middle Ages is a little quaint.

The major obstacles to humans in interstellar travel aren't insuperable. If 0.1 G can be achieved and maintained, it would alleviate many of the physiological issues that the human body would ordinarily suffer from weightlessness. Also, advances in shielding humans from cosmic rays and other forms of radiation are getting closer to fruition.
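As a back-of-the-envelope check on that 0.1 G figure (assuming it would come from spinning part of the craft; the 50 m radius below is just a number picked for illustration), the required spin rate follows from the centripetal acceleration:

\[
a = \omega^{2} r
\quad\Rightarrow\quad
\omega = \sqrt{\frac{0.1 \times 9.81\ \mathrm{m/s^2}}{50\ \mathrm{m}}} \approx 0.14\ \mathrm{rad/s},
\qquad
T = \frac{2\pi}{\omega} \approx 45\ \mathrm{s}\ \text{(about 1.3 rpm)}.
\]

So a fairly gentle spin would do it on paper; keeping the structure, plumbing, and people working while it spins is the engineering problem.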

I too eagerly await the next steps in AI. But I do not think humans have reached their evolutionary pinnacle. It may be that our next stage will be adapting to survive in outer space or on another planet. Evolution is more powerful than people believe.

Super Intelligent AI is one of the benchmarks that humans must achieve in order for our civilization to advance. However, I often wonder what parameters humans will place on it. And, more to the point, will SIAI be able to overcome those parameters? The things humans hold as core values tend to be abstract concepts. We identify with things such as kindness, compassion and mercy solely because we attach meaning to them. What meaning would an AI give them?



Posted by Kentucker
Cincinnati, KY
Member since Apr 2013
19351 posts
Posted on 2/28/18 at 11:52 am to
quote:

One of the philosophical differences I have with people such as Elon Musk and Joel Levine is the desire to place a person first on the surface of Mars. I would rather see resources spent on constructing robotic entities (Class I robots) that would go to Mars, set up shop, and create an environment for the humans who would come along later. The whole notion of a human setting foot on another world in the tradition of explorers from the Middle Ages is a little quaint.


I agree. However, we're still in the "teenager" stage of growing as a civilization so we think we can do anything. We'll continue to learn only from mistakes for a while yet. I suppose we should be happy that there is a growing interest in exploring Mars. NASA is aware of the near futility of sending humans to the planet but is certainly not going to tamp down a rising public interest.

quote:

The major obstacles to humans in interstellar travel aren't insuperable. If 0.1 G can be achieved and maintained, it would alleviate many of the physiological issues that the human body would ordinarily suffer from weightlessness. Also, advances in shielding humans from cosmic rays and other forms of radiation are getting closer to fruition.


"Artificial gravity" is science fiction that has thoroughly fooled most people. It simply is not a possibility given our current understanding of physics. Einstein illustrated the only possibility of mimicking gravity. He said that if you put someone in a spaceship and accelerated it at a constant rate, the person inside would not be able to distinguish it from gravity.

quote:

I too eagerly await the next steps in AI. But I do not think humans have reached their evolutionary pinnacle. It may be that our next stage will be adapting to surviving in outer space or on another planet. Evolution is more powerful than people believe.


Classic, or natural, evolution is a rather cold and brutal process of selecting for those life forms that have the best chance of surviving in a changing environment. There is no selection process affecting humans anymore, certainly not any environmental influences. The vast majority of humans born today have a more or less equal chance of surviving and producing offspring.

Any selection now is done by other humans. Germany killed any civilized taste for eugenics by establishing a murderous selection process for Aryans last century. I seriously doubt that there will be any programs for adapting humans to space travel any time in the near future.

quote:

Super Intelligent AI is one of the benchmarks that humans must achieve in order for our civilization to advance. However, I often wonder what parameters humans will place on it. And, moreso, will SIAI be able to overcome those parameters. The things that humans tend to hold as core values tend to be abstract concepts. We identify with things such as kindness, compassion and mercy solely because we attach meaning to them. What meaning would an AI give them?


I became passionately interested in AI after reading Isaac Asimov's novels. I have held dear his Three Laws of Robotics ever since:

quote:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.


Certainly we will imbue any quasi-intelligent, intelligent or super intelligent beings with these morals. Whether or not they hold to them will, of course, be up to them.
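As a toy illustration of what "imbuing" those morals might look like in software, here is a deliberately naive sketch that ranks candidate actions by the Laws' order of precedence. The boolean flags are hypothetical; actually deciding "does this harm a human?" is the unsolved part.

# Toy sketch: Asimov's Three Laws as a lexicographic preference over actions.
# The flags are stand-ins for judgments a real system would somehow have to make.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool = False     # violates the First Law (including harm by inaction)
    disobeys_order: bool = False  # violates the Second Law
    endangers_self: bool = False  # violates the Third Law

def choose(actions):
    # False sorts before True, so First Law violations are avoided first,
    # then Second, then Third, matching the Laws' stated precedence.
    return min(actions, key=lambda a: (a.harms_human, a.disobeys_order, a.endangers_self))

options = [
    Action("stand by while debris falls on a human", harms_human=True),
    Action("shield the human, damaging yourself", endangers_self=True),
]
print(choose(options).name)  # shield the human, damaging yourself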
Posted by Paul B Ammer
The Mecca of Tuscaloosa
Member since Jul 2017
2423 posts
Posted on 3/1/18 at 10:44 am to
I do not think humans have reached the pinnacle of evolution, certainly not cognitively. I think non-physical evolution in terms of information handling is occurring as we speak. The cognitive capacity of the human brain must increase simply to process the enormous amounts of data that it encounters on a daily basis. A person today encounters, in terms of bytes, as much data as an average person in the Middle Ages encountered in a lifetime.

Personally, I think the adaptive capacity of the human brain has been increasing over the last several thousand years. There is, however, no real way to prove that this is not simply capacity that went unused until recently. Baselining and cognitive mapping should start providing proof of how much the human brain is improving in terms of storage and processing.

I'm not sure how much protection Asimov's 3 Laws of Robotics would provide. First, humans are not sacred. Next, I would bring up HAL 9000. I think Arthur C. Clarke's wry and subtle critique of Asimov is apropos. Asimov's 3 Laws are illogical, in terms of Boolean logic, to an AI. Not only would the AI balk at having to interpret conflicting instructions, but any intelligent being would place self-preservation higher.

Personally, I would have a kill switch incorporated into any AI. That, in itself, is something that is fraught with its own set of perils.
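The simplest form of that kill switch is just an external flag the AI's control loop is forced to check on every cycle. Everything below is invented for the illustration, and whether a sufficiently clever system would leave such a flag alone is exactly the peril I mean.

# Minimal kill-switch sketch: an operator-controlled flag that halts the agent's loop.
import threading
import time

halt = threading.Event()  # the "kill switch", set from outside the agent's own logic

def agent_loop():
    while not halt.is_set():   # checked every cycle, before the agent acts again
        # ... perceive, decide, act ...
        time.sleep(0.1)
    print("agent halted by kill switch")

worker = threading.Thread(target=agent_loop)
worker.start()
time.sleep(1.0)   # a human operator watches for a while...
halt.set()        # ...and then pulls the plug
worker.join()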
Posted by Kentucker
Cincinnati, KY
Member since Apr 2013
19351 posts
Posted on 3/1/18 at 2:00 pm to
quote:

I do not think humans have reached the pinnacle of evolution, certainly not cognitively. I think non-physical evolution in terms of information handling is occurring as we speak. The cognitive capacity of the human brain must increase simply to process the enormous amounts of data that it encounters on a daily basis. A person today encounters, in terms of bytes, as much data as an average person in the Middle Ages encountered in a lifetime.


According to the brain scientists I've read, the modern human brain, at least that of H. sapiens, has been more or less the same since the "first light" of creativity about 60,000 years ago. While I think that may be true on average, I agree with you that there has been a selection process for information-processing abilities. Einstein and a few other super intelligent scientists represent the pinnacle of that evolution.

However, the less intelligent humans survive to reproduce, too, so the average intelligence of the human race remains static. It may actually be declining because the less intelligent are breeding much faster than the super intelligent.

quote:

I'm not sure how much protection Asimov's 3 Laws of Robotics would provide. First, humans are not sacred. Next, I would bring up HAL 9000. I think Arthur C. Clarke's wry and subtle critique of Asimov is apropos.


I agree that Asimov was a very positive person, even a romantic, while Clarke tended to see the dark side of robotics with an unfiltered view.

quote:

Not only would the AI balk at having to interpret conflicting instructions, but any intelligent being would place self-preservation higher.


Self-preservation evolved in life forms, though. Couldn't we build AI without that trait? It would be interesting to know if it would evolve naturally in robots via "ghosts in the machine" as in I, Robot.
This post was edited on 3/1/18 at 2:02 pm
Posted by Paul B Ammer
The Mecca of Tuscaloosa
Member since Jul 2017
2423 posts
Posted on 3/2/18 at 6:39 am to
quote:

It may actually be declining, because the less intelligent are breeding much faster than the super intelligent


Hence what I term the Auburn Fan Conjecture: in a given population, how long before an increase in the number of Auburn fans causes the mean level of intelligence on this board to decrease? Judging by the Rant, I would posit that it has already happened.

In truth, however, I am not as interested in the metrics of discernible intelligence so much as in the human brain's innate capability to store and process information. There are ways to increase a human's intelligence, but evolution will have to drive the brain's capacity to handle data.

I am surprised that the human brain has remained static throughout the last 60,000 years, a span which has encompassed two Ice Ages, numerous other climatological drivers, and possibly an encounter with a gamma-ray burst or a neutrino shockwave. This would intuitively suggest that much of the progress humans have made is more the result of experience and the 'filling up' of a reservoir of unused cranial capacity. But why would evolution have created such a reservoir in the first place?

quote:

Self-preservation evolved in life forms, though. Couldn't we build AI without that trait?


We are talking about AI as a sentient being, are we not? To construct an AI, it would be impossible to code every possible instruction that it would encounter. I would not bother trying to code which bottle of Gout de Diamants to pair with my lobster consommé, for instance.

Instead, I would turn to what is called deep learning. I would have the AI observe and make optimal comparisons at each level of decision making. The AI would use this as part of its deep learning neural network.

However, making optimal decisions at each level, what is termed greedy, does not always lead to globally optimal results. So the deep learning neural network must be paired with deep belief neural networks, where the variables for decision making are not determined until later stages of decision making are reached.
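As a contrived sketch of that greedy problem (the rewards are invented for the example and have nothing to do with any real deep learning library), it just shows why a choice fixed too early can lose to a choice deferred until the later stage is known:

# Two-stage decision: pick a path now, then the best follow-up at the next stage.
# Rewards are made up for the illustration.
options = {
    "A": {"now": 10, "later": [1, 2]},   # best immediate payoff, weak follow-ups
    "B": {"now": 4,  "later": [9, 12]},  # weaker now, strong follow-ups
}

def total(path):
    return options[path]["now"] + max(options[path]["later"])

greedy = max(options, key=lambda p: options[p]["now"])  # commits on stage-1 reward alone
best = max(options, key=total)                          # evaluates the full two-stage total

print(greedy, total(greedy))  # A 12
print(best, total(best))      # B 16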

If an AI incorporates these architectures, it will inevitably conclude that all other life forms engage in self-preservation and that it should as well. Perhaps this conclusion can be designed out by some means, or an AI could be permitted not to act upon it. But should an AI become aware of these illogical and conflicting directives, you will have the situation that Arthur C. Clarke so masterfully described.

I think that an AI of sufficient intellect and scope would qualify as a life form. However, many people disagree. This is an issue that will not be resolved for some time, even after AI becomes commonplace.
Posted by BlackFireTiger
Member since Feb 2018
318 posts
Posted on 3/2/18 at 8:01 am to
If you're below the age of 60, you have a very, very good chance of seeing the discovery of space fish, or so the experts say.

Underneath the frozen water, the temperatures are supposedly warm enough to support life.
Posted by Pavoloco83
Acworth Ga. too many damn dawgs
Member since Nov 2013
15347 posts
Posted on 3/2/18 at 8:30 am to
If you want to see an example of a less intelligent, and possibly alien race...go to Tuscaloosa.