Mailbag: Time dilation in Schroeder’s biblical cosmology

LH asked for clarification on the biblical cosmology of Gerald Schroeder. There was some question about the nature of the redshift and how it relates to cosmological time dilation.

Physicist Gerald Schroeder has written four books on the relation of biblical wisdom to modern science. His book, The Science of God, explains his biblical cosmology in detail. I’ve created an illustrated slideshow here (see also the “Six Days” tab at the top) that covers the basics of his model. The gist is that Schroeder is able to convincingly reconcile a literal interpretation of Genesis 1 (six 24-hour days of creation) with a universe that is billions of years old by invoking the phenomenon known as time dilation. That’s the slowing down of time in one reference frame as observed from another reference frame. It’s a scientifically sound model, but it’s also a bit difficult for the average scientific layperson to understand, because it involves one of the trickiest concepts in science: the nature of time.

Even scientifically literate people get tripped up by time dilation, because the effect can occur for different reasons. So, it’s no surprise that one of the most commonly misunderstood aspects of Schroeder’s biblical cosmology is the nature of the time dilation effect that gives us six 24-hour days in one frame of reference and 14 billion years in another. It is not due to gravitational effects or to comparing two different physical reference frames within the universe. Rather, it arises from the following:

  1. God’s reference frame existing beyond space and time, which regards the universe as a whole
  2. the expansion of the universe
  3. comparison of the flow of time between different moments in cosmological history

Schroeder assumes Genesis 1 is told from God’s perspective. God’s reference frame is not any one place within the universe, but outside the universe, regarding the universe in its entirety. So, to find something to form the basis of the Genesis clock, Schroeder looked for something that takes into account the three points above. He chose the cosmic background radiation (CBR), because it permeates the entire universe, it has existed virtually since the beginning of the universe, and encoded in its properties is the history of the expansion of the universe.

The time dilation for Genesis 1 is based on the expansion of the universe. This is neither special relativity nor a gravitational effect; it is merely a consequence of the stretching of the CBR light waves as the universe expands. This is a well-established effect in cosmology, and one I have to take into account in my own research on distant quasars. For simplicity, if you think of the CBR light waves as a sine wave, then the frequency of the sine wave represents the beat of the Genesis clock. The higher the frequency, the faster the clock ticks off time. If you think of drawing this sine wave on a piece of stretchable fabric representing the fabric of the universe and then stretching this fabric, the distance between the peaks of the sine wave gets longer, and hence the ticks of the clock get longer (i.e., slower). So, what’s happening is that as the universe ages and expands, the frequency of the CBR light decreases, and the ticks of the Genesis clock for each moment in time get slower compared with previous moments in time.
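To put the stretching argument in standard cosmological notation (these are the textbook redshift relations, not formulas taken from Schroeder’s book): as the universe expands by a scale factor a(t), frequencies drop in proportion, and the interval between wave crests measured at a later epoch is longer than the interval measured at an earlier epoch by the same factor, conventionally written as 1 + z.

% Textbook redshift/time-dilation relations (my notation, not Schroeder's):
% frequency falls as the universe expands, and intervals between wave crests
% stretch by the same factor.
\[
  \nu_{\mathrm{now}} \;=\; \frac{a(t_{\mathrm{then}})}{a(t_{\mathrm{now}})}\,\nu_{\mathrm{then}},
  \qquad
  \frac{\Delta t_{\mathrm{now}}}{\Delta t_{\mathrm{then}}} \;=\; \frac{a(t_{\mathrm{now}})}{a(t_{\mathrm{then}})} \;=\; 1 + z .
\]

As a back-of-the-envelope check of my own (not Schroeder’s calculation), for six days of Genesis-clock ticks to span roughly 14 billion years of Earth time, the average stretch factor would have to be about (14 × 10^9 yr × 365.25 days/yr) / 6 days ≈ 8.5 × 10^11, or nearly a trillion.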

That’s how we can measure, from our earthly perspective looking backward in time, 14 billion years, while God measures, from his perspective regarding the universe as a whole looking forward in time, six 24-hour days.

Backyard Astronomy: July 2014

A bit late in posting this, but there isn’t much going on this month, anyway.

July 29-30: Delta Aquarid Meteor Shower. This shower runs mid-July to mid-August and peaks the night of the 29th / the early morning of the 30th. The meteors are debris from Comets Marsden and Kracht. The expected rate is a moderate 20 meteors per hour.

Stuff for this week

Just a note that I’ll be posting here again soon. I experimented with posting daily, but that doesn’t suit me or my schedule. The next experiment will involve posting “approximately whenever I feel like it.” When something interesting comes up, and I have the time, I’ll write.

There are a couple of speaking engagements coming up in September, the announcement of which will occur in the next day or so. If you’re in the Austin, TX area and are interested in topics of science and faith, consider attending. One will be for the monthly Reasons to Believe meeting, which is free and open to the public; the other will be at an apologetics conference that requires registration.

I’ll also be answering a question that came up about Gerald Schroeder’s reconciliation of an old universe with a literal interpretation of scripture, hopefully this week.


Backyard Astronomy: June 2014

June 7: Conjunction of the Moon and Mars. Conjunction is when two celestial objects line up along the line of sight from the Earth. For instance, during the New Moon phase, the Moon is in conjunction with the Sun. During this month’s Moon-Mars conjunction, the two objects will come within two degrees of each other in the sky. It’s not super-close — by way of comparison, the angular size of the Moon is just half a degree — but it makes for a nice pairing for binoculars or a telescope.

Moon-Mars conjunction


June 22 – July 2: June Boötids meteor shower. As far as meteor showers go, this one’s pretty wimpy. The expected rate is one to two meteors per night, though on rare occasions there can be strong outbursts. Unlike most meteor showers, this one is best viewed in the evening hours.

Replay: An invalid equation

Traffic’s up after the announcement of the publication of our Astronomy and Astrophysics curriculum, so we’re replaying some of our more important posts from the archives for our new readers. This article was originally posted on January 23, 2012.

Scientists working in the Netherlands and the U.S. who developed a more transmissible strain of the deadly bird flu have temporarily suspended their work to allow governments around the world time to assess the risks to “biosecurity.” The Dutch and American scientists, who produced their work separately, have submitted their results for publication. The National Institutes of Health, which funded the research, has requested the omission of important details over fears that the information could be used by terrorists to unleash a potentially genocidal attack in the future.

Keep this in mind as you consider what atheist writer and neuroscientist Sam Harris says about his “extinction equation”:

religion + science = human extinction.

He argues that religion is the source of all great conflict. Continued conflict with the destructive tools provided by science will result in the destruction of humankind. Therefore, all those who are dedicated to science must work to eliminate religion if humankind is to avoid extinction.

Yet as Christian writer Vox Day stated in his book, The Irrational Atheist, if we take Sam Harris’ Extinction Equation seriously, historical evidence shows that the most prudent action we can take is to eliminate science. As a professional astrophysicist who has dedicated her life to science, I must grudgingly concede that Day is correct if we are limited to an either/or choice between religion and science.

From a purely pragmatic point of view, it’s not difficult to choose which variable to set equal to 0 in Harris’ Extinction Equation. It would be exceedingly difficult, if not impossible, to eliminate religion, which has existed in myriad forms for at least several thousand years. Even religion’s greatest opponents, secular humanists devoted to Darwinism, recognize that the human species demonstrates a deep and enduring need for religion, so much so that even today as much as 90% of people in the world claim to be religious in some form or fashion.

Science by comparison has only been around in its modern form since the time of Galileo. It is understood, supported, and practiced by vastly fewer people around the world than religion is. The scientific method does not come easily to most people, which is why it takes many years of education and training to effectively instill it even in the small minority of humans who are predisposed to it. Science would simply be much easier to eliminate from humankind than religion.

Historical evidence also shows that religion, all by itself, poses far less of a threat to humankind than science does. It is true that throughout history religious groups have made war against each other. But the whole truth is that humans have always fought one another for territory and dominance, beginning long before the appearance of modern religions. There is little or no evidence of peaceful coexistence on Earth at any time or place, with or without religion. Monotheistic religion is therefore not a basic cause of conflict, but rather a relatively recent addition to the ongoing chaos and conflict of human affairs.

During the thousands of years that religion has existed, the human population has risen from a few million to almost seven billion. Since the time of the Reformation, human prosperity has improved to the point where 75% of humankind has risen out of its natural state of poverty, and there is a well-founded hope that the remaining 25% will follow in the next 50 years. The only threats to human survival during the time of religion were the possibility of an errant asteroid, such as the one that is believed to have wiped out the dinosaurs, and naturally-arising contagious diseases that periodically ravaged civilizations.

Science and technology have changed all of that; there can be no doubt that they’ve had a much greater and more negative impact on human violence than religion ever had. An explosion of technology beginning in the 15th century made it possible for the ongoing conflict to enter the era of modern warfare, resulting in new levels of slaughter which eventually led to the horrors of the First World War. The determination of the Nazis to use science to destroy their enemies in World War II rushed humankind to the point where scientific knowledge could result in its utter destruction.

Realistically speaking, and regardless of the dangers, we can’t put the scientific genie back in the bottle. Nor can humans live without some spiritual/moral system. As the world seems on the brink of a preemptive attack (possibly nuclear in nature) to eliminate Iran’s nuclear capability, there is good reason to be pessimistic about the future of humankind. Some kind of moral system must function to prevent scientific knowledge from causing the end of conscious life on Earth. As Vox Day observes, “the more pressing question facing the technologically advanced societies today is Quis eprocuratiet ipsos scientodes? Who will supervise the scientists?”

Does such a moral system exist? Yes, and that’s why I don’t think we face Harris’ either/or choice. Surak explains why here.

Saturday morning astronomy news roundup

The Serpent

The Serpens Cloud Core [Credit: NASA / JPL-Caltech / 2MASS]

NASA’s Spitzer Space Telescope has captured several photos of the Serpens Cloud Core, where moderate-mass stars like our Sun are being born, and compiled them into the beautiful image above. The image is a combination of infrared light, which can penetrate through dust, and visible light, which is blocked by dust. The dark region to the left of the bright core is a region of dust so dense that not even infrared light could get through.

We all know that the Moon (with some help from the Sun) causes the tides on the Earth, but most people probably don’t realize that the Moon also becomes distorted by its gravitational interaction with the Earth. NASA scientists have been studying the distortion using NASA’s Lunar Reconnaissance Orbiter and Gravity Recovery and Interior Laboratory missions. The tidal effect on the Earth is most evident in the ocean tides, because water is easy to move around, but the effect on the Moon, which is mostly solid except for its small core, is to distort it into a slight egg shape. The force of the Earth’s gravitational tugging is sufficient to produce a bulge of about 20 inches on the surface of the Moon. Because the Moon appears to wobble slightly as seen from Earth (and, likewise, the Earth appears to move around in the sky as seen from the Moon), the position of that bulge shifts over time.

How the Christian view of time led to modern science

People in the modern West take for granted that events proceed in a line stretching from the past through the present and into the future. They also believe that each point in time is unique—two events can be very similar, but no event or chain of events is ever exactly repeated. This view of time is called linear time, and it is so deeply ingrained in Westerners from birth that it is difficult for them to imagine any other view of time. However, the overwhelming majority of people who have ever lived have had a very different view of this fundamental aspect of existence.

Cyclical Time

Non-linear time

The alternative to linear time is a belief that time endlessly repeats cycles. I have great difficulty convincing my astronomy students that, from an observational point of view, cyclical time makes much more sense than linear time. I ask them to place themselves in the ancient world with no clocks or telescopes or computers, but only their senses to guide them and imagine what they would be capable of understanding about time. The days would be marked by the daily motions of the Sun and other celestial objects rising and setting in the sky, the months would be marked by repeated phases of the Moon, and the years would be marked by the reappearance of certain constellations in the sky. Other cycles in nature, such as the seasons, tides, menstrual cycles, birth-life-death, and the rise and fall of dynasties and civilizations, would dominate ancient life.

It should therefore be no surprise that the religion and worldview of many cultures were based on a belief in cyclical time. Among them were the Babylonians, ancient Chinese Buddhists, ancient Greeks and Romans, Native Americans, Aztecs, Mayans, and the Old Norse. These societies practiced an ancient form of astronomy called astrology, which had as its chief function the charting of the motions of heavenly objects to predict where people were in some current cycle. It was a complicated process, because there are multiple cycles occurring in the heavens at any time, and ancient beliefs were based on the idea that human fate was determined by cycles working within other cycles. As a result, ancient calendars, such as the Hindu and Mayan, were very elaborate, with a sophistication that surpassed those of the modern West. The idea of cyclical time continues in the present day in the Hindu tradition and in native European traditions such as that of the Sami people of northern Scandinavia.

Mayan calendar

Not only does cyclical time make sense in terms of what people observe in nature, but it also satisfies a deep emotional need for predictability and some degree of control over events that the idea of linear time can’t. If time flows inexorably in one direction, then people are helplessly pulled along, as though by a powerful river current, toward unpredictable events and inevitable death. Cyclical time gives the promise of eternal rebirth and renewal, just as spring always follows winter. These pagan beliefs were so powerful that they continue to influence all of us today; for example, the celebration of constant renewal is the basis for the New Year holiday.

Obviously, all people have thought in terms of linear time on a daily level; otherwise they wouldn’t be able to function. But on the larger scale of months, years, and lifetimes, the notion of linear time was viewed as vulgar and irreverent. A cyclical view of time was a way for people to elevate themselves above the common and vulgar and become connected to that which appeared heavenly, eternal, and sacred. This view also offered a form of salvation in the hope that no matter how bad things are in the world at the moment, the world will inevitably return to some mythical ideal time and offer an escape from the terror of linear time. You and I would consider this ideal time to be in the past, but in cyclical cultures, the past, the present, and the future are one.

Primitive cultures, like the Australian aborigines, had no word for time in the abstract sense—that is, a concept of time that exists apart from people and the world. For them, time was concretely linked to events in their lives—the past, the present, and the future formed an indistinguishable whole as the great cycles determined everything. The ancient Hebrews also had no word for and therefore no concept of abstract time, yet their concept of time was a linear one in which events occurred sequentially. These events formed the basis for their concrete notion of time. Except for the first six days of creation, time as described in the Old Testament was completely tied to earthly events like seasons, harvest, and, most importantly, God’s interaction with the world.

The ancient Greeks also believed the universe was cyclical in nature, but unlike other ancient cultures they also believed in an abstract notion of time that exists separately from events. They had two words for time—chronos and kairos—representing quantitative/sequential time and qualitative/non-sequential time respectively. From chronos we derive familiar time-related words such as chronological, chronic, and anachronism. In order to appreciate the Greek concept of time, one has to understand that to the Greeks time was motion. It’s not difficult to envision since the length of a day is measured by tracing the path of the sun and stars across the sky. When Plato spoke of time, he described an “image of eternity … moving according to number.” His student, Aristotle, said that time is “the number of motion in respect of before and after.”

Plato and Aristotle from the fresco “The School of Athens” by Raffaello (1510)

The Judeo-Christian beliefs about time that emerged between the time of Moses and that of Jesus mark a profound break with the thinking of the ancient past. The events of the Bible clearly indicate a unidirectional, sequential notion of time that is utterly counter-intuitive to what the senses observe in nature. Time is not discussed directly in the Old Testament, but we can gain an understanding of the ancient Hebrew notion of time from the language. The ancient Hebrew root words for time were related to distance and direction: the root word for “past” and “east” (qedem, the direction of the rising Sun) is the same; the root word for the very far distant in time (olam), past or future, is also used for the very far distant in space.

Perhaps the ancient Hebrews anticipated the early 20th century mathematician Hermann Minkowski, who postulated that space and time are two aspects of a single entity called spacetime.  In any case, the Hebrew practice of viewing time from a perspective that looked backward was eventually adopted by modern astrophysics. The Judeo-Christian concept of linear time developed into our modern view of time and became one of the great foundations of modern science.

Something very powerful was required to overcome the ancient perceptions of and feelings about time.  Though the concept of linear time started with Judaism, it took hold and was spread throughout the Western world by the rise of Christianity. In the fifth century, Augustine noted that the Bible is full of one-time events that do not recur, beginning with the creation of the universe, culminating with the Crucifixion and Resurrection, and ending with the Second Coming and Judgment Day. He realized that Christian time is therefore linear rather than cyclical. The desire for some sense of control and the hope for eternal renewal became better satisfied by a belief in a loving Creator and the resurrection of his Son who was sacrificed on the cross. (It is interesting to note, however, that cyclicality does have some place in Christianity—we are born when we leave the womb and we are reborn when we go to heaven.)

Nearly a thousand years after Augustine made his pronouncement, the era of clock time emerged. Clock time is measured by mechanical apparatuses rather than by natural events, and marks the final triumph of abstract linear time over concrete, cyclical, event-driven time. Mechanical clocks were invented in Europe in the 14th century, followed by spring-driven clocks in the 15th century. Refinements to spring-driven clocks in the 16th century enabled Danish astronomer Tycho Brahe to make his famously accurate celestial observations, which were used by Johannes Kepler to formulate the laws of planetary motion.

Tycho Brahe and his quadrant

However, the drive for increasing precision in time-keeping was motivated not by pure science, but by the application of science in the quest for accurate navigation. Sea-faring navigators required precise measurements of time so that they could use the positions of star patterns to determine longitude. With these highly precise clocks, it was possible to keep excellent time. It is interesting to note the abstract notion embedded in the phrase “keeping time”: we keep up with an external flow of time, rather than letting events define a concrete notion of time.
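The arithmetic behind that quest is simple, and worth a quick illustration (this is the standard relation between time and longitude, not a description of any particular navigator’s method): the Earth rotates 360 degrees in 24 hours, or 15 degrees per hour, so comparing local celestial time against a clock keeping the time of a reference meridian gives longitude directly.

% Standard time-to-longitude conversion (illustration only):
% Delta t is the difference between local solar time and the time kept by the
% reference clock; a local time behind the reference puts the ship west of
% the reference meridian, ahead of it puts the ship east.
\[
  \Delta\lambda \;=\; 15^{\circ}/\mathrm{hour} \;\times\; \Delta t .
\]

For example, if local noon arrives three hours after noon on the reference clock (local time three hours behind), the ship is 45 degrees west of the reference meridian.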

It is not a coincidence that the era of modern science began after the invention of high-precision time-keeping devices. Modern science began with the Scientific Revolution in the 16th and 17th centuries, starting with the Copernican Revolution, but it progressed slowly because of a lack of necessary technology. Galileo, for instance, was forced to time some of his experiments by using his own heartbeat. By the late 17th century, Newton had formulated the branch of mathematics now referred to as calculus and published his laws of gravity and motion. His work was based on his belief in a flow of time that was both linear and absolute. Absolute time means that time everywhere flows at a rate that never changes.

Remember that the ancient Greeks viewed time and motion as one. This is important because the scientific study of motion based on the principle of cause and effect requires linear time. Newton’s laws and his view of time as absolute held sway for almost two hundred years. But Newton suffered from limited perspective just as the ancients had—humans perceive time on Earth as always taking place at the same rate, but that isn’t true. Newton is still considered the greatest scientist who ever lived, but we know now that he did not have the full picture.

Isaac Newton performing an experiment

It was Albert Einstein and his theory of relativity that gave humankind the strange truth about time. By the early 20th century Einstein had succeeded in demolishing Newton’s notion of absolute time, showing instead that time is flexible: it passes at different rates in different places in the universe, depending on the location and motion of the observer. It is interesting that the Bible anticipated this in Psalm 90:4, “For a thousand years in your sight are like a day that has just gone by, or like a watch in the night.”
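For readers who want the quantitative version, the standard special-relativistic time dilation formula (a textbook result, not anything unique to this essay) says that a clock moving at speed v relative to an observer ticks more slowly by the Lorentz factor:

% Standard special-relativistic time dilation (textbook result):
% Delta t_0 is the interval measured on the moving clock itself (proper time),
% Delta t is the longer interval measured by the observer watching it move,
% and c is the speed of light. Gravity slows clocks in an analogous way near
% massive bodies.
\[
  \Delta t \;=\; \gamma\,\Delta t_{0},
  \qquad
  \gamma \;=\; \frac{1}{\sqrt{1 - v^{2}/c^{2}}} .
\]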

The current scientific view of time is a combination of the ancient Greek abstract notion of time, the Judeo-Christian notion of linear time, and Einstein’s relative time. Cosmology, the branch of physics that deals with the overall structure and evolution of the universe, works with two times: local time, governed by the principles of relativity, and cosmic time, governed by the expansion of the universe. In local time, events occur in the medium of spacetime as opposed to being the cause of time. Time is motion, motion is time, and objects may freely move in any direction in space.

Albert Einstein

But the next big scientific question is, can objects also move in any direction in time? Physicists have determined that the arrow of time points in one direction. But how can we determine that direction? Biblically, we understand that time flows from the creation to Judgment Day. Scientifically, it has been less clear.

Ultimately, physicists determined that the arrow of time points in the direction of increasing disorder. A branch of physics known as thermodynamics, the study of how energy is converted into different forms, quantifies disorder using a concept called entropy. The second law of thermodynamics states that in a closed system, entropy (the amount of disorder) never decreases. This means the universe will never spontaneously move back in the direction of increasing order. It is the progression of the universe from order to disorder that provides the direction for the arrow of time.
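In symbols, the standard textbook statements of these two ideas (Boltzmann’s definition of entropy and the second law for an isolated system, not anything specific to this essay) are:

% Textbook statistical mechanics and thermodynamics (illustration only):
% Omega counts the microscopic arrangements consistent with the macroscopic
% state, so higher entropy means more disorder; k_B is Boltzmann's constant.
% The inequality is what gives time its direction.
\[
  S \;=\; k_{B} \ln \Omega,
  \qquad
  \Delta S \;\geq\; 0 \quad \text{(isolated system)} .
\]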

The linearity and direction of time determined by thermodynamics seemed clear until physicist and mathematician Henri Poincaré showed mathematically that the second law of thermodynamics is not completely true. The Poincaré recurrence theorem proved that entropy could theoretically decrease spontaneously (the universe could go back in the direction of increased order). But the timescale needed to give this spontaneous decrease any significant chance of happening is so inconceivably long, far longer than the current age of the universe, that there is little probability it will happen before the universe reaches maximum entropy.

Nevertheless, some Western thinkers mistakenly took Poincaré’s theorem to mean that reality is cyclical in a way that does not provide the ancient escape from the profane to the sacred. This led these thinkers to despair about the possibility that human existence is nothing more than the pointless repetition of all events for all of eternity. Nineteenth century philosopher Friedrich Nietzsche was one who took the Poincaré recurrence theorem to the hasty and illogical conclusion that there was no purpose or meaning to existence. On the other hand, there is little comfort to be gained from contemplating an endlessly expanding universe in which everything becomes hopelessly separated from everything else. One may well wonder if there is no escape from time.

Friedrich Nietzsche — contemplating the Poincaré recurrence theorem?

Christians need not despair. The Bible tells us that the universe in its present form will cease to exist on Judgment Day, which will presumably occur long before there is any significant probability of a Poincaré recurrence, and will certainly make the notion of an endless expansion moot. If that is true, we inhabit a universe that is for all purposes linear and finite in time, and we have a much happier fate than being condemned to a never-ending repetition of meaningless events or a universe that expands forever and ever.

While it is important that Christians understand that modern science confirms the biblical view of time, it is also important that Christians understand the role of biblical belief in shaping modern science. Modern science developed only after the biblical concept of linear time spread through the world as a result of Christianity. True science, which at its root is the study of cause and effect, absolutely requires linear time.

The Bible: fertile ground for modern science

The foundation of 21st century astronomy and physics is the big bang theory—the “orthodoxy of cosmology” as physicist Paul Davies describes it—which relies on linear time with a definite beginning. The false cyclical view was perpetuated by two human limitations: limited perspective and misleading emotions. It took faith in the Word of God enshrined in the Bible and trust in the scientific method to overcome these limitations so that humankind could understand the true nature of time.

Camelopardalid “storm” a dud

Oh, by the way, the much-anticipated Camelopardalid meteor “storm” didn’t live up to the hype. We were clouded out here in Central Texas, so I didn’t even get a chance to try to watch it. After predictions of possibly hundreds of meteors per hour, the actual tens per hour that were observed must’ve been pretty disappointing to those who had clear skies and tried to watch.

Astronomy and Astrophysics curriculum for the Southern Hemisphere

We’ve had a few requests to adapt the Astronomy and Astrophysics curriculum for the Southern Hemisphere, so we’re in the process of adapting the labs and activities that currently only work in the North. The publisher will make a special Southern Hemisphere addendum available to those who buy the curriculum. Meanwhile, if you’re from below the equator and have already bought the curriculum, just send me an email and I’ll make sure you get a copy of the adapted activities and labs.

Confessions of scientists

I get the sense that a lot of the general public sees scientists somewhat as caricatures, with cold, Spock-like intellects, genius IQs, and, unfortunately, an almost super-human ability to understand the natural world. Atheists, in particular, have a tendency to regard scientists as members of an infallible priestly class. But we are not. We are merely human, which means we are fallible and susceptible to the same limitations in our thinking and behavior as anyone else. As we’ve discussed here at SixDay, there is a structure built into science that helps overcome those limitations (see here and here), which is why the hard sciences, since the time of Copernicus, have had an impressive track record of helping to lift humankind out of ignorance.

Now, without tooting my own horn, I will admit that scientists are exceptional in a few ways. First, most of us love science, enough to devote our lives to it, and so we’re motivated to do it well. Second, we have an above-average ability to do science. That said, we are all-too-human in every way, as this confessional from a Ph.D. scientist (in biology) shows:

There are some things I need to confess. This isn’t easy to say, but after working as a real scientist with a Ph.D. for 6 years, I feel it’s finally time to come clean: Sometimes I don’t feel like a real scientist. Besides the fact that I do science every day, I don’t conform to the image—my image—of what a scientist is and how we should think and behave. Here’s what I mean …

He goes on to list several traits that show how he doesn’t conform to his image of a scientist. This list isn’t a confessional to the general public, but rather to his scientific peers, who pretty much all have their own images of how a scientist should think and behave. I think the big secret is that most of us sympathize with this biologist, and people not employed in the sciences can benefit from hearing about it.

What follows is a selection of his confessed points with my own perspective added:

I don’t sit at home reading journals on the weekend.

I don’t, either, unless I’m preparing to submit a journal paper. Not very many scientists enjoy reading journals. I know one well-respected scientist who admits that he can only read journals in front of the TV.

I have skipped talks at scientific conferences for social purposes.

I went to a conference in Honolulu a few years ago, and I think I attended a grand total of three talks, including the one on which I was a co-author. A lot of science conferences are held in attractive locations in order to draw more attendance, but sometimes I think they should hold these things in the middle of nowhere.

I remember about 1% of the organic chemistry I learned in college. Multivariable calculus? Even less.

I remember most of what I studied in college, because I either use it or teach it. That said, I have forgotten a significant chunk of what I learned in my grad courses, because a lot of it I simply don’t use. This is not unusual. I have heard some professors admit that if the more seasoned professors in any department had to take the qualifying exam, they probably wouldn’t pass.

I have felt certain that the 22-year-old intern knows more about certain subjects than I do.

Of course s/he does. Nobody can know everything about everything. Only a prideful fool doesn’t acknowledge it.

I have gone home at 5 p.m.

One of my professors told a story in class about a colleague of his, who was a student of a famous Nobel laureate. The Nobel laureate fit the caricature of a scientist in that he spent an inordinate amount of time working (yes, this is exactly the sort of person who wins a Nobel prize). One Sunday morning, the colleague got a call from his Nobel laureate advisor, who said, “I’m so sorry you’re not feeling well.” Not understanding his meaning, the student told his advisor he was feeling fine. “Oh,” said the Nobel laureate advisor sarcastically, “I just assumed the only reason you wouldn’t be in the office is that you were ill.”

Anyway, yes, I, too, have gone home at 5 pm.

I have asked questions at seminars not because I wanted to know the answers but because I wanted to demonstrate that I was paying attention.

I suspect a lot of scientists do this. Personally, I stopped doing it after I got my Ph.D. As a grad student, I had to take seminars for credit, and was expected to participate, i.e. ask questions that showed I was paying attention. These days, I figure if I show up and am not fiddling with my phone or falling asleep, everyone knows I’m paying attention.

I have never fabricated data or intentionally misled, but I have endeavored to present data more compellingly rather than more accurately.

I’m not sure how much this happens, and I guess I’m naive enough to assume that most of the time my peers are presenting their data in a responsible manner. This is an ethical issue I’ve discussed at length with my Ph.D. advisor, and I have since adopted his style: present all of our data in one set of tables and graphs and then present a “select” sample in another set that more compellingly makes our case. Anyone reading our papers can decide for themselves if we’ve made our case.

I have pretended to know what I’m talking about.

All I can say is that I’ve not gone out of my way to draw attention to the fact that I sometimes don’t know what I’m talking about.

I sometimes make superstitious choices but disguise them as tradition or unassailable preference.

I’ve never consciously done this, but I know others who do.

When a visiting scientist gives a colloquium, more often than not I don’t understand what he or she is saying. This even happens sometimes with research I really should be familiar with.

This only happens with visitors who speak on topics outside of my area of research. In such cases, I only feel like a dope when a scientist in my field proceeds to ask intelligent-sounding questions of the visitor, which happens more often than I’d like to admit.

I have called myself “doctor” because it sounds impressive.

To paraphrase Dr. Evil, I didn’t spend all those years in grad school to be called “Ms.”, thank you very much. There are times I’ve forgotten I’m a “doctor,” and other times when I’ve milked it for all it’s worth.

I dread applying for grants. I resent the fact that scientists need to bow and scrape for funding in the first place, but even more than that, I hate seeking the balance of cherry-picked data, baseless boasts, and exaggerations of real-world applications that funding sources seem to require.

I don’t know a single scientist who doesn’t dread applying for grants. The process is unpleasant at best, and the odds that you’ll get funded get worse every year. That said, I don’t resent the fact that scientists need to ask for funding. There is no reason scientists should expect to get other people’s money without making a compelling case for why they should get it. It’s frankly unsettling that any scientists believe they are entitled to funding.

I have performed research I didn’t think was important.

This can happen for at least three reasons: you’re part of a research group and have to participate in certain projects; you’re bound by stipulations in a grant or something similar; or you’re pressured to publish a minimum number of papers every year. Thankfully, I’ve never had to do this. I may be doing work that in reality no one else thinks is important, but I never take on a project unless I, personally, think it will add to the sum of knowledge in my area.

In grad school, I once stopped writing in my lab notebook for a month. I told myself I could easily recreate the missing data from Post-it notes, paper scraps, and half-dry protein gels, but I never did.

I once thought I could piece together what I did on a project by just remembering what I did — after all, it was so obvious at the time — and, boy, did that not work out well. A month later, I couldn’t recall half of what I did, and ended up doing the whole thing over.

I do not believe every scientific consensus.

Neither do I. However, that this is being confessed rather than proudly declared is of great concern. Not because this particular person feels this way, but because probably a lot of scientists feel this way, and they shouldn’t. See Surak’s recent commentary on this.

I do not fully trust peer review.

Neither do I, nor should anyone fully trust it. Our peers are just as fallible as we are. But if you are in a field that’s devoted to the pursuit of truth and mostly proceeds without a lot of politicization, money, and emphasis on consensus, then it’s probably good enough.

When I ask scientists to tell me about their research, I nod and tell them it’s interesting even if I don’t understand it at all.

I wonder how much this happens, especially when I’m asked about my own research. Personally, I rarely feign interest. If I don’t understand anything they’re saying, I try to identify something I can at least ask an intelligent-sounding question about.

I was never interested in Star Wars.

Sacrilege! Actually, I know quite a few scientists who have no interest in Star Wars, Star Trek, or any other sci-fi, but they tend to be older. Most younger scientists I know are big fans of sci-fi.

I have openly lamented my ignorance of certain scientific subtopics, yet I have not remedied this.

We all do this. There just isn’t enough time to pursue every subtopic of interest.

I have worried more about accolades than about content.

This is perfectly natural, and there’s nothing necessarily hideously horrible about it, unless a person is primarily motivated by accolades.

During my graduate-board oral exam, I blanked on a question I would have found easy in high school.

One of my friends, who is not a scientist, sat in on my doctoral exam and really enjoys reminding me of the very simple question I blanked on that even he knew the answer to. It happens. Between the stress of the exam, the exhaustion from writing your dissertation, preparing your presentation, cramming the week before, and the committee members all staring at you with the intent of showing you that they still know more than you do, it’s easy to miss a simple question. I know a well-respected scientist who was so irritated about missing some easy questions on his exam that he claimed to have plotted the murders of his committee members for about a week before he finally let it go. It also happened to Heisenberg, one of the founders of quantum mechanics, and his committee almost didn’t give him his Ph.D. because of it. I figure we’re all in good company.

I have feigned familiarity with scientific publications I haven’t read.

Who hasn’t? The open secret is that a lot of scientists just read the abstract or a summary of the work written in another paper. Part of the problem is the sheer volume of publications coming out these days. My advisor recalls a time when he could sit down and at least read the abstracts of every single paper in the Astrophysical Journal, if not entire papers. These days that would be impossible; there are just too many papers, even in a subtopic, to keep up.

I have told other people my convictions, with certainty, then later reversed those convictions.

Me, too, and it’s a good thing. If this never happens, it means your convictions have petrified into dogma.

I have killed 261 lab mice, including one by accident. In doing so, I have learned nothing that would save a human life.

So has your average barn cat. I don’t really see this as a problem. A lot of science consists of “no result,” which is still a result. That being said, one of the nice things about being an astrophysicist is that I don’t have to kill anything, I don’t have to break anything, and I don’t have to create anything hazardous in a laboratory.

I can’t read most scientific papers unless I devote my full attention, usually with a browser window open to look up terms on Wikipedia.

Most papers in my particular topic I can read more casually than this. Anything broader in scope, however, does require my full attention and some kind of reference material. I am not too proud to admit that I’ve gone back to undergrad textbooks to figure out something I read in a paper. One thing that really helped with this, ego-wise, was listening to one of the world’s greatest scientists at a conference talk about how he struggled with a particular math concept on his way to solving Einstein’s field equations, and that the only way he could figure it out was to study an undergrad textbook on the topic.

I allow the Internet to distract me.

This turns out to be a big problem for a lot of scientists. I have a colleague who keeps a sticky note on her computer monitor reminding her to stop surfing. When I find the Internet too distracting, I make a game out of not allowing myself to read my favorite websites until I accomplish a task, and then I limit the surfing to 10 minutes.

I have read multiple Michael Crichton novels.

Most of us have. Well, in my case, only one Crichton novel, but I have read my share of pulp fiction.

I have used big science words to sound important to colleagues.

Most of us have fallen into this habit.

I have used big science words to sound important to students.

A lot of scientists/professors do this, and I’m not sure why. Some jargon is unavoidable, and serious students have to learn it. However, we’re already in a position to be respected by students, and the focus should be on conveying ideas to them. Personally, it’s much more satisfying to see the light go on than to have students be impressed with me.

I sometimes avoid foods containing ingredients science has proved harmless, just because the label for an alternative has a drawing of a tree.

This made me laugh out loud. We’re all human. Marketing works. If it’s any consolation to my colleagues, it’s at least somewhat based on the science of human behavior.

I own large science textbooks I have scarcely used. I have kept them “for reference” even though I know I’ll never use them again. I intend to keep them “for reference” until I die.

So do I. I can’t bear to part with books, even ones I haven’t read and will probably never read.

I have abandoned experiments because they did not yield results right away.

We all have. Sometimes we go back to them, sometimes we don’t.

I want everyone to like me.

I’m not sure what to say about this one. Is he talking about personally or professionally? There are many scientists who, judging by their behavior, clearly couldn’t care less if anyone likes them personally. But I very much doubt anyone wants to be ostracized professionally.

I have known professors who celebrate milestone birthdays by organizing daylong seminars about their field of study. To me, no way of spending a birthday sounds less appealing.

I dunno. I like my field of study, and my colleagues, well enough that this does sound like a lot of fun.

Sometimes science feels like it’s made of the same politics, pettiness, and ridiculousness that underlie any other job.

It feels like it, because it is. There is nothing about science that removes human nature from the endeavor. That said, I have found the environment in academic science to be a bit less susceptible to this stuff—at least enough that I find my current job a lot more enjoyable than any other job I’ve ever had.

I decry the portrayal of scientists in films, then pay money to go see more films with scientists in them.

For me, at least half the fun is identifying all the ways the movie screws up the portrayal of scientists and science in general. And sometimes, the portrayal is a lot more fun than reality. Actually, most of the time. I remember being tickled by Jodie Foster’s character in the movie Contact. Her big discovery was portrayed very dramatically in the movie, but Carl Sagan, being a scientist, had written it much more realistically in the book—her pager beeped when an automated algorithm detected a possible signal, and she checked it out when she got back to the office.

I have worked as a teaching assistant for classes in which I did not understand the material.

Yeah, but that’s a great way to learn the material. I know a Nobel laureate who claims that whenever he wants to learn about a topic, he teaches a class on it.

I have taught facts and techniques to students that I only myself learned the day before.

Most professors have taught classes in which they are only one or two steps ahead of their students. This isn’t necessarily a bad thing, since, by the time we get to where we are, we have the ability to learn things quickly and disseminate them adequately.

I find science difficult.

If a scientist doesn’t find science difficult, he’s probably not trying hard enough. That said, if he finds it overwhelmingly difficult, he’s in the wrong field.

I am afraid that people will read this confession and angrily oust me from science, which I love.

Nah. Most people in science will read this and think, “Thank goodness someone else said it.”

I have felt like a fraud, not once, but with such regularity that I genuinely question whether anyone has noticed I don’t belong here. I am certain that one day I’ll arrive at work, and my boss will administer a basic organic chemistry test, which I’ll fail, and he’ll matter-of-factly say, “That’s what I thought.”

I felt like this through the first half of grad school, but after I finished my coursework and started producing some good results, I finally started feeling competent and like I belonged. I also realized a lot of other grad students and young scientists felt this way. I remember the shock I felt when one student, whom I regarded as particularly competent, said he felt like a fraud. A lot of us live by the motto “fake it ’til you make it.”

I know I have arrived where I am through privilege, good fortune, and circumstance. Anything I genuinely earned could not have been earned without those precursors.

Anyone born in the West, especially in the U.S. or Canada, at this particular point in history is extraordinarily fortunate. Privilege and circumstance? Not sure what he means by that. I was very focused about getting where I am today and worked pretty hard for it. Most people in the sciences did the same.

I can’t be the only scientist who feels like a fraud. But we don’t talk about it. No one volunteers to proclaim their inadequacies. In fact, scientists go to great lengths to disguise how little we know, how uncertain we feel, and how much we worry that everyone deserves to be here but us. The result is a laboratory full of colleagues who look so impossibly darn confident. They’re the real scientists, we tell ourselves. They can follow the entire seminar. They read journals for pleasure. Their mistakes only lead them in more interesting directions. They remember all of organic chemistry. Pay no attention to the man behind the curtain.

I’ll venture to guess that a lot of people in just about any profession feel the same way. But maybe the reason this hits scientists particularly hard is the near-deification of scientists in this increasingly post-Christian age. It’s a lot to live up to. I think it was easier for scientists to be at peace with their fallibility in decades and centuries past, because many of them believed that what they were doing was fulfilling a sacrament in discovering and revealing God’s Truth. This sounds lofty, but it’s actually a pretty humbling idea.

Maybe the idea of science is easier to love than the minutiae of science. Or maybe the veneer of professionalism is important to protect the integrity and authority of scientists. Or maybe that’s a cop-out.

It’s not a cop-out. First, the idea of anything is always easier to love than the details. This is as true of institutions and professions as it is of people. But I don’t think there is a mere veneer of professionalism in science—there is true professionalism, and that’s vitally important. It’s as important to science as manners are to civilization. It’s just that it’s not perfect, but when it comes to human beings, nothing ever is.