Clever people make mistakes as easily as anyone else. History also shows that a team of good advisers doesn’t make you immune from mistakes either. And I hope never to see a world where people are afraid of asking challenging questions of someone else because they feel they are not as clever themselves.
The US seems to have been remarkably successful with its landers program, with only 1 failure out of 8 missions. But none of them were guaranteed successes. For instance, look at the last 4 US landings, all successes. Suppose each of those missions really had a 50/50 chance of success. Then the chance of getting 4 successes in a row is 0.5^4 = 6.25%, or 1/16. Such odds do happen; it's not too extraordinary. I think it is really too soon to be sure that the US has achieved better than 50/50 reliability for its landers, though most likely it has (especially since with each mission they learn more through telemetry etc.).
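As a quick sanity check on that figure, here is a minimal sketch of the arithmetic; the 50/50 rate is the hypothetical from above, not a real estimate of lander reliability:

```python
def prob_all_succeed(p_success: float, n_missions: int) -> float:
    """Chance of n independent missions all succeeding,
    each with the same assumed per-mission success probability."""
    return p_success ** n_missions

# The hypothetical 50/50 case from the text: 4 successes in a row.
print(prob_all_succeed(0.5, 4))  # 0.0625, i.e. 6.25% or 1/16
```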
Sending spacecraft into space is just a tricky business, at least so far, and we don't have the number of flights or the track record needed to achieve the same reliability we have for cars or for aircraft.
As for Elon Musk, I don't know about his technology, whether he can achieve it or not; that is his speciality. But he has had failures and explosions of his spacecraft, as have almost all spacecraft developers at some point or other. It certainly looks promising for uncrewed cargo transport to the ISS, which it is already used for. Whether he can achieve the reliability needed for human transport – perhaps – but I wait to see. It is hard to beat the Soyuz system, with its multiple fail-safes proven over many more flights than any other system.
But whether or not he achieves the reliability needed for human spaceflight, he doesn't seem to have made investigating human factors a priority for interplanetary missions, and he hasn't said anything about his plans for preventing forward contamination of Mars or how he would fit in with planetary protection.
That question needs to be answered if you are serious about landing on Mars. And until someone does answer it, in a way that is generally agreed to be a solution, there is a big question mark over the whole thing for anyone who cares about planetary protection (as many do).
After all, how do they protect against a hard landing? What is their target probability of contaminating Mars in the event of a human crash landing? Or overall? Presumably it is higher than the 0.01% target probability for a robotic lander. How high is acceptable for humans?
It's no good just saying "Elon Musk is clever, so there's no point in asking that question". As I said at the start, I hope we never get to the state where people are afraid of asking someone challenging questions because they think they are not as clever.
Suppose, just for example, they calculated a 1% chance of contaminating Mars per human mission, and say 10% overall during the exploration phase – would that be acceptable to the other parties to the OST, such as ESA, China and Russia?
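For what it's worth, a 10% overall figure is roughly what you get by compounding a 1% risk per mission over a 10-mission exploration phase, assuming the missions are independent (both numbers are the hypothetical illustration from above, not real estimates):

```python
def prob_at_least_one_contamination(p_per_mission: float, n_missions: int) -> float:
    """Chance that at least one of n independent missions contaminates,
    each carrying the same assumed per-mission risk."""
    return 1 - (1 - p_per_mission) ** n_missions

# Hypothetical 1% per mission over a 10-mission exploration phase:
print(round(prob_at_least_one_contamination(0.01, 10), 4))  # 0.0956, close to 10%
```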
These, and many other questions, need to be answered. And basically, we don't know enough to answer them yet; they are also mixed in with other questions that need scrutiny at the international level, about what counts as an acceptable probability of contaminating Mars – something that would be discussed by COSPAR, not NASA.
I don't see how the process can end any differently from the earlier studies, myself, as we haven't got that much more information to go on. What information we do have since those studies points, if anything, towards planetary protection being harder to do rather than easier. The best it can do is perhaps to outline in more detail the areas where more research is needed – that is my forecast, for what it is worth :).
And as for human factors: nobody has yet demonstrated long-term closed systems in space, and nobody knows what gravity prescription is needed for human health. So again, cleverness just doesn't enter into it. Some things you can only find out by experiment, especially things to do with ecosystems and the human body. Plants and humans are far too complex to model on a computer.
Now, it's not that there is a direct causal connection making smarter people unattractive or attractive people less smart. The first problem is simply that you are looking for two features that are both rare. If, for example, 10% of the population is very attractive and 10% is very smart, and there is no common factor linking the two, that leaves just 1% who are both very attractive and very smart. It also means that 90% of all very attractive people will not be very smart, and 90% of all very smart people will not be very attractive, which can create the perception that "smarter women are unattractive".
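The arithmetic behind that, assuming the two traits are independent and each held by 10% of the population (the illustrative figures from above), can be sketched as:

```python
p_attractive = 0.10  # assumed fraction who are very attractive
p_smart = 0.10       # assumed fraction who are very smart

# Under independence, the joint probability is the product:
p_both = p_attractive * p_smart
print(round(p_both, 4))  # 0.01 -> only 1% are both

# Independence also means that among the very attractive,
# the fraction who are NOT very smart is simply 1 - p_smart:
print(1 - p_smart)  # 0.9 -> 90% of very attractive people are not very smart
```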