Mindy Perkins – The Stanford Daily

When robots moonwalk: The value in human experience

I find it mind-boggling that humans have walked on the moon.

Not everybody agrees; I’ve received many a sidelong glance for voicing my conviction. In some ways I understand the what’s-the-big-deal attitude: After all, my generation was born decades after Neil Armstrong’s one small step, eons after Yuri Gagarin became the first man in space. Growing up, we took for granted that being an astronaut was as real a career option as being a ballerina or a firefighter.

Yet to me there is something viscerally astounding about staring at a bright circle in the night sky, so distant and flat it could almost be a paper cutout, and realizing that Homo sapiens have flown to it, walked on it, even taken pieces of it back home. But a small part of me is disappointed that I was not alive to see them do it — and may not live to see them do it again. The economic costs of sending humans into space are phenomenal, and from that standpoint, we’re better off letting robots do the job.

The underlying issue here is that this utilitarian, practical view often assumes there is no inherent value in aspects of human experience, such as the memories of the humans who walked on the moon and the wonder of the people who knew they did it. That assumption is a mistake. Exploration, inspiration and fascination are at the core of human motivation and happiness. And while we can’t always go to extravagant lengths to indulge these feelings, we should not dismiss their weight as we make decisions in a roboticized future.

Modern society makes no secret of its obsession with happiness. The Declaration of Independence claims its pursuit as an inalienable right, and in the 1970s, the king of Bhutan coined the term “gross national happiness” to emphasize its importance in societal development. Insofar as any of us values happiness, we should also value experiences. Numerous studies conclude that good memories make us happier than any possessions ever could. The flipside is that bad experiences can also make us more unhappy than bad purchases can, because we are more invested in what we do than in what we own.

As our understanding of biology and technology improves, we are designing algorithms that can perform the same tasks that humans can, with greater predictability (and often reliability) than their error-prone makers. It’s not impossible to imagine that robots will eventually be able to do anything we can — more cheaply and more efficiently. At that point, humans will be obsolete from a utilitarian standpoint: the Internet Explorer of future technological progress.

Maybe by then, we’ll be ready to cede our lives and our society to our mechanical children. But we’re not extinct yet. As robots displace humans, we will pass through an uncomfortable era in which jobs we enjoy doing are outsourced to our silicon-based superiors. If a human loves science but a robot is better at it, should the human cease to pursue science because he or she can never keep up with the robot? Or is the happiness that the human garners from learning and researching sufficient that he or she should be allowed to continue the job? Time alone will determine how we answer this question.

Now, any discussion of a robot-dominated future would be incomplete without mention of “The Matrix.” I’d be inclined to worry less about robots trapping humans in virtual reality than about humans voluntarily installing themselves there. My hope is that we will not become so enthralled with our imaginary universes that we lose a sense of awe in the exploration of the real world. I’ve argued that we ought to spend less time on our computer screens and more in our three-dimensional surroundings; I reiterate here that there is an entire universe out there for us to study. We could leave it to the robots, but along the way, we would lose a key sense of our own identity as humans and of our connection to the world that produced us.

Robots can do what humans cannot do. Yet it is for this reason that they do not inspire the same kind of appreciation as do the feats of other humans. As our creations, they are not subject to the limitations that we are, do not share our development from childhood to adulthood and do not yet share our emotions and convictions. We are inspired by other people because we see elements of ourselves in them — and by extension, we believe that we, too, could do as they do. We look at them and see hope. The vicarious experience of the astronauts’ triumph instills a certain pride in humankind, in what we can accomplish, in what we can dream. We should look for that sense of wonder and fascination in whatever we do. Our experiences have value — and so do we.

Contact Mindy Perkins at mindylp ‘at’ stanford.edu.

Life on the bell curve

Pedometers, nutrition facts, Fitbit: We’re saturated with apps and devices for keeping track of our individual habits so that we can live more healthful lives. We can monitor how much we sleep or exactly how many calories we eat in an effort to conform to numerical recommendations derived from measurements of populations and statistical analyses of the resulting data.

While the guidelines are valuable and have life-saving clinical applications, focusing too much on numbers can divorce us from the reality we’re attempting to understand. For one, bias is inherent in what we choose to measure: Since it’s impossible to measure everything, we have to pick the characteristics we think are most likely to relate to what we want to understand, which can lead us to overlook other contributing factors or alternate explanations. Then there’s the question of what we do with the statistics we generate. In making diagnoses, we have to choose where to draw the line between “normal” and “abnormal,” to decide what worries us and what doesn’t. We reason that if we can quantify, we can control – a principle we should rethink in light of probabilistic uncertainties and individual variation.

A probability distribution tells you how common a characteristic is in a population. In other words, if you choose an individual from a population and measure some characteristic, how likely are you to get a certain value?
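
To make the idea concrete, here is a minimal sketch in Python – with population parameters invented purely for illustration, not real survey data – of how a distribution answers that question:

```python
import random

# Hypothetical population: 100,000 adult heights drawn from a normal
# distribution with mean 170 cm and standard deviation 10 cm.
# These parameters are invented for the example.
population = [random.gauss(170, 10) for _ in range(100_000)]

# "If I pick someone at random, how likely is a height above 190 cm?"
# Estimate the probability as the fraction of the population above the threshold.
taller = sum(1 for height in population if height > 190)
print(f"P(height > 190 cm) is roughly {taller / len(population):.3f}")  # about 0.02
```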

Sometimes we use distributions to determine how justified we are in making assumptions, like associating tall athletes with basketball. Yet just as not all tall athletes play basketball, not all basketball players are tall. Distributions deal with averages across populations. In everyday life, we deal with individuals. As Daniel Kahneman writes in “Thinking, Fast and Slow,” “Statistical base rates are facts about a population to which a case belongs, but they are not relevant to the individual case.”

The principle behind personalized medicine is to tailor treatment to the patient – to recognize that people are unique, with singular genomes and circumstances that influence their health, and thus the tests and therapies that might be most effective for them. Implementing personalized medicine on a wide scale will require a significant amount of time, effort, individual data collection and processing.

But there’s another question lurking at the fringes: Will personalized medicine be used to recognize individual variation, or to treat individual variation? Is having a genetic propensity for a higher-than-average blood pressure an individual characteristic or a risk? Can we even tell the difference except by using population statistics about the effects of high blood pressure?

Where we distinguish “variable” from “problematic” extends beyond issues of physical wellness. For example, one possible explanation for the apparent increase in autism spectrum disorders (ASD) is ontological: Maybe we’re just recognizing more things as belonging on the spectrum than we did before. If we have indeed broadened the definition of ASD, then where does the spectrum “start”? What exactly are we considering to be “normal” human behavior? Could we continue to broaden the spectrum to include any slight deviation from “normal”?

These same questions apply to other facets of mental function and personality as well. When do you stop being melancholy and start being depressed? When do you go from being easily distracted to having ADD? Take the argument to its extreme and we could classify every personality quirk as a mental disorder in need of treatment. What are we actually treating? Are we lumping people who don’t need help with those who genuinely do?

Maybe this is all alarmist. But the issue is especially relevant in light of two considerations: (a) how much data we now collect about ourselves, which increases the opportunities we have to quantify our characteristics and compare them to numerical averages; and (b) an inclination in the medical system to do too much rather than too little.

In a recent article in the New Yorker, Atul Gawande notes that there is a tendency in American society to err on the side of over-diagnosis and overcompensation–that we’re so afraid of missing something potentially harmful that we’ll go to extreme measures to address any “abnormalities.” Statistics play a vital role in this process: For every medical test, there’s some probability it will give a false positive or a false negative; for every measurement, there’s some probability of error from noise; for every symptom, there’s some probability it actually indicates a disease; for every treatment, there’s some probability it won’t work. And there is some probability that the measured phenomenon is due to individual variation from the norm that doesn’t actually pose a problem.
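
These probabilities combine in unintuitive ways. As a sketch of the arithmetic – using a prevalence and error rates invented for the example, not figures from any real test – consider what a single positive result on a screening test actually means:

```python
# Base-rate arithmetic for a hypothetical screening test.
# All three numbers below are invented for illustration.
prevalence = 0.01           # 1% of people actually have the condition
sensitivity = 0.95          # P(positive test | condition present)
false_positive_rate = 0.05  # P(positive test | condition absent)

# Bayes' rule: among everyone who tests positive, what fraction
# actually has the condition?
p_positive = prevalence * sensitivity + (1 - prevalence) * false_positive_rate
p_condition_given_positive = prevalence * sensitivity / p_positive

print(f"P(condition | positive test) = {p_condition_given_positive:.2f}")
# Prints about 0.16: when a condition is rare, most positives are
# false alarms, which is one statistical root of over-diagnosis.
```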

We need statistics to make society and medicine safer and more effective. We need baseline values so that we know when there are problems. Yet ultimately we cannot control every number we measure, and maybe we shouldn’t try. Averages may be calculated from individuals, but individuals can’t be calculated from the average.

 

Contact Mindy Perkins at mindylp ‘at’ stanford.edu. 

Groupishness in the age of the Internet

Most technologies don’t overhaul human motivations. Instead, they act as enablers for preexisting biological or cultural tendencies — new outlets for old habits. Once a technology becomes pervasive enough, it may significantly influence cultural trends, but many of the principles underlying behavior before and after the technology remain substantially the same. People do what they do regardless of the avenue.

The Internet is a prime example of an enabler for social behaviors on an unprecedented scope. Winston Shi points out that “while the way we live our lives has changed, friendship is fundamentally the same.” Social media doesn’t instill a desire for connection; it just facilitates ways of building it. Forums, blogs and essentially any site that allows comments can help like-minded people find each other. The resulting communities create a real sense of comfort and belonging for the people who find them and often provide a valuable platform for marginalized groups. The flipside, though, is that it can become dangerously easy to fall into homogeneous, polarized camps — to segregate ourselves along ideological lines. The consequences for us — and for our societies — are profound.

In his book “The Righteous Mind,” moral psychologist Jonathan Haidt remarks on the inherent “groupishness” of humans. We like to be a part of something bigger than ourselves. It causes us to gravitate toward mosh pits or orchestras and helps us organize our loyalties and our social circles. Because of it, we work toward the good of people other than ourselves.

But belonging to a group comes at the cost of dividing everyone into two categories: “us” and “them.” Perhaps counterintuitively, it’s easier to do that in a bigger community, just as a matter of sample size. Researchers at the University of Michigan found that in smaller schools, people are more likely to have diverse friendships, simply because it’s harder to find other people who are like them. But the larger the school, the less diverse their personal relationships tend to be, since it’s easier and more comfortable to band with like-minded individuals.

We also tend to read opinions we already agree with — to, as Haidt puts it, look for reasons why we can believe things we intuitively like and reasons why we mustn’t believe things we intuitively dislike. Studies on Internet habits reflect this tendency: People on the web segregate by political opinion, preferentially reading blogs they agree with, which in turn preferentially link to other blogs of the same partisan alignment. Political retweets follow the same pattern (although, interestingly, mentions do not). And both liberals and conservatives have their own brand of dealing with opinions that differ from theirs: On Facebook, liberals are more likely to block or un-friend people for making political statements they disagree with, while conservatives are less likely to see these statements in the first place.

Thus, the sheer size of the Internet community serves as a double-edged sword, giving individuals an easy way to find others that will accept them but at the same time giving them the means to insulate themselves from differing views.

What do we do? Research by numerous universities and companies such as McKinsey indicates that the synthesis of disparate opinions leads to higher-quality, more creative work and better financial performance at the office. But unease with differences — and the lack of obvious commonalities around which to congregate — can erode loyalties in large, diverse communities like cities, where people are less likely to volunteer, give to charity and trust their neighbors, according to research by Harvard political scientist Robert Putnam. We are enriched by our interactions with people who aren’t like us, forming bonds over what makes us unique — yet we can feel like outcasts without people who share our beliefs and experiences. How do we encourage diversity without division? How do we encourage unity without erasure?

There may not be a good answer, and reasonable people will disagree. But it is essential that we realize the implications of living in an era when technology gives more people across the globe more opportunities to interact with each other — or to ignore each other.

So what groups are you a part of? Why do you count yourself as a member of these groups? What makes them groups? Now ask yourself: Are there groups that overlap with these groups? That contain these groups? What makes them groups? What unites people in these groups?

Not everybody has to be your best friend. The vast majority of people won’t be. But everyone in any group has something in common. Think about what that is, about what that means beyond the instinctive ideological camps to which we adhere — sometimes blindly, sometimes consciously, sometimes instinctively, sometimes self-righteously.  Because until we stop believing “we” are right and “they” are wrong, we may never see the intersections of our communities for the opportunities they are.

 

Contact Mindy Perkins at mindylp ‘at’ stanford.edu.

Nerd is the word, geek is the speak

“In a perfect world, all the geeks get the girls” — at least, according to American Hi-Fi. For those unversed in punk rock from the mid-2000s, this song outlines a boy’s unexpectedly successful romantic/sexual encounter with a girl at a bar. More than the incident itself, the lyrics speak to times when we feel socially awkward and yet somehow “get lucky,” encouraging audience empathy with the “loser” protagonist.

The song, aptly titled “The Geeks Get the Girls,” is just one early example of an ongoing trend: the inclusion of geeks and nerds in popular culture. In many ways, this movement is heartening progress toward recognizing a group of society’s historically unpopular outcasts. While geeks and nerds of yore were almost universally considered to be off-putting, socially inept brainiacs, today the terms are less insulting and more inclusive — a result of the changing attitudes toward technology in the digital age. Along with these new definitions comes the prospect of greater acceptance for nerds and geeks themselves.

By today’s popular usage, what is a geek or nerd? The question is a loaded one, and I won’t encapsulate the whole debate here. A popular distinction is that geeks are enthusiasts and fans while nerds are intellectuals and practitioners — so a Star Trek geek will tell you what class of starship Kirk commands in the original series, and a Star Trek nerd will build it for you (or try). The terms are not mutually exclusive; many nerds are geeks and vice versa, and theoretically one can be a nerd in one area and a geek in another. A number of traditional dictionary adjectives, such as “peculiar,” “unfashionable” and “awkward,” need no longer apply, although in popular consciousness, nerds are often associated with computers more than with other scholarly pursuits.

Nonetheless, the old stereotypes are far from disappearing, though they may be invoked more affectionately than maliciously. For example, the wildly popular TV show “The Big Bang Theory,” now in its eighth season, follows a group of three socially awkward male scientists plus one engineer as they navigate work, friendship and romance. The overall tone of the show is playful, paying clever tribute to “real” nerds while obviously exaggerating certain stereotypes on-screen.

Yet as with any media pursuit, there are some uncomfortable tendencies in the show. Most of the characters are preoccupied with sex or sexual attractiveness, all but one of the leads is white, and of the four main women on the show, the only one who is not supposed to be socially awkward is also the least brainy. We don’t expect the media to be an accurate portrayal of life, but these kinds of issues of representation continue to shape cultural perceptions in sometimes harmful ways.

So does popular culture’s mantra that “smart is the new sexy” really imply greater social acceptance for the awkward, the peculiar and the unfashionable? Or do people feel kindly toward nerds and geeks on the screen but not on the street?

There is reason to think that the attitude toward geeks and nerds — not just their celebrity counterparts — is changing. In this regard, the technology boom of the modern era has been a real boon. Society’s increasing reliance on digital devices and apps has opened the doors for nerdy engineers and programmers to exercise their talents for ends nearly everyone appreciates. One educational psychologist suggests that “geek” and “nerd” have lost many of their negative connotations in our generation because the shift from the manufacturing age to the information age has made traditionally geeky or nerdy inclinations more economically and socially valuable.

Regardless, is it possible that more inclusive definitions of the words just obscure the subset that is still singled out? This is a difficult and complicated issue to tackle. An indirect but hopeful sign of progress is the individual’s choice to identify as a geek or nerd, often as a symbol of pride (think “geek chic” or “nerd nation”). Those who voluntarily adopt an identity are less likely to feel like outcasts.

Furthermore, the Internet gives geeks and nerds a refuge and a community. Instant access to themed discussion forums, chat rooms and posting/blogging environments like Tumblr gives people an easy way to meet others who share their interests. For geeks, this is a rich opportunity to share one’s enthusiasm with similarly enthusiastic peers and to discuss news, advice and personal experiences with others who understand one’s obsessions and struggles. Fandoms are a prime example of online geek havens, forming what media scholar Henry Jenkins calls “knowledge communities” dedicated to the “dynamic and participatory” acquisition of information related to a common interest. One need look no further than ThinkGeek to see that fan communities have even penetrated the market.

Whether we embrace our obsessions with “Lord of the Rings” or particle physics, or whether we really believe that the geeks get the girls, there’s no better time than now for putting aside concerns of social awkwardness and focusing instead on the interests and inclinations that mark our contributions to our jobs and our communities.

Contact Mindy Perkins at mindylp ‘at’ stanford.edu.

On display: Museum exhibits in the digital age

In an era where information and entertainment are readily available through the Internet, museums face a new obstacle: How do they remain relevant when visitors can access content so easily online?  The answer many have found is to redesign their exhibits to be more “engaging” — for example, by adding interactive components so that visitors can participate in the experience.  But effectively engaging many different kinds of people can be a significant challenge.

A logical first question is whether or not museums even have a place in the digital landscape.  In a recent article in the New York Times, Holland Cotter explains why he believes digital art collections — such as Stanford’s own Imagebase — can never live up to real ones.  He cites the inability of virtual reality to capture the size and scope of works, their arrangement in a physical space and the viewer’s ability to examine works from different angles to reveal details, textures and tricks of the light.

The same kind of reasoning applies to other types of galleries as well.  Thus, museum exhibits have at least two major roles that the Internet can’t fulfill.  One, they engage senses more fully than do screens.  And two, they contain specimens: artifacts from ancient cultures, documents from historical figures, rocks from the surface of the moon.  Good displays should capitalize on these unique dimensions, teaching visitors and simultaneously encouraging them to explore.

Among the most successful demonstrations of this technique are the new labels in the Rijksmuseum in Amsterdam, which ask visitors to consider various philosophical questions related to art pieces, to ponder why they like or dislike certain works and even to rethink established notions of great art and culture.  In this way, museum-goers are invited both to examine the art and to internalize what they’re seeing.

Yet museums face the additional challenge of appealing to a broad audience, and some interactive exhibits are in danger of falling short.  At the Exploratorium in San Francisco, researcher Toni Dancu works to determine why the overwhelming majority of children in the science exhibits are male.  Part of the reason seems to be that the interactive displays tend to stress competition, while girls are generally more interested in collaboration and storytelling.  Dancu’s Ph.D. dissertation outlines suggestions for making exhibits more appealing to girls, such as including applications to the community and incorporating female role models.  She is planning further research to identify exhibit design choices that appeal equally to girls and boys.

Another change museums are undergoing as they experiment with new interactive exhibits is a subtle but pervasive shift toward extroverted media.  As a child I used to visit the Hall of Life on the third floor of the Denver Museum of Natural History (now the Denver Museum of Nature and Science).  The quiet gallery housed mostly specimens of human organs.  It has since been replaced by Expedition Health, a series of rooms outfitted with screens and activities at every display.  For me, the voices and flashing images, the shrieks of children and the jostling masses of people waiting for their turn at each interactive station create an intensely overwhelming atmosphere that leaves me little energy to absorb the content of the exhibit.  Why should I put up with these crowds in order to watch videos I could more comfortably see at home?

For introverts, a little bit of quiet and the opportunity for solitary reflection are necessary to focus and to think deeply about a subject.  Exhibits like Expedition Health are hardly conducive to this kind of learning.  Perhaps the exhibit could better appeal to those in need of quiet by including another room with a series of interactive displays that aren’t as fast-paced or hectic.

However, Expedition Health does do one thing right: It creates a personalized experience.  Throughout the exhibit, visitors can keep track of their own heartbeats, gaits, heights and weights, as well as their performance on various tasks.  These records are printed out at the exit for visitors to take with them.  The idea of an individually tailored experience has also been implemented in other museums in the form of Bluetooth “beacons” that give smartphone users information on what they’re looking at and the opportunity to share images and tidbits over social networks.

Going forward, museums face a plethora of challenges relating to broad audience engagement.  To avoid becoming obsolete, museums should highlight the aspects of their exhibits that visitors can’t get elsewhere, such as interaction with original artworks or specimens.  Above all, they should avoid presuming a short attention span or lack of interest from the audience.  We all have something to gain from a space that engages our minds in novel ways and draws us to consider both the world around us and our place within it.  And for that purpose, museums are indispensable.

Contact Mindy Perkins at mindylp ‘at’ stanford.edu. 

Technology as a last resort

When was the last time you were hanging out with a friend and decided to check your texts, e-mail or Facebook to fill a lull in conversation? Or how about the last time you were in a noisy environment you could easily have left, but instead tried to block out the sound by blasting music or wearing noise-canceling headphones? Have you ever taken antibiotics for a cold that probably would have gone away on its own? Drunk caffeine to stay awake and work even though a few hours of sleep would probably have made you more productive?

We have a tendency not to see our crutches – to conveniently treat symptoms instead of inconveniently addressing causes. Historically there have been many ways to do this.  Technology is simply one of the most pervasive. Even on a societal level, our instinct is often to throw more technology at a problem, even when there are other, more effective approaches that should be considered first.

Take air pollution from automobiles.  There are massive efforts underway to use more electric and hybrid cars in place of traditional gasoline-run cars, including financial incentives to use more efficient vehicles.  Yet manufacturing a car produces an amount of carbon dioxide comparable to that released by the car in its lifetime, unless the car is driven for a longer time than most people keep their automobiles.  This includes electric cars, which use more expensive, rare and processing-intensive materials than their petroleum-burning counterparts, and which also have components such as the battery that are difficult to recycle once the car is retired.  Combine that with the so-called Jevons paradox – make something more efficient and people will use it more – and suddenly having more technologically advanced vehicles doesn’t seem like the best way to cut air pollution.
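
The claim about lifetimes is really a break-even calculation. Here is a minimal sketch of that arithmetic in Python – every number below is hypothetical, chosen only to show the shape of the trade-off, since real figures vary widely by model and study:

```python
# Break-even sketch: when do an electric car's lower per-kilometer emissions
# offset its higher manufacturing emissions? All numbers are hypothetical.
gas_manufacturing = 7.0   # tonnes of CO2 to build a gasoline car
ev_manufacturing = 11.0   # tonnes of CO2 to build an electric car (battery-heavy)
gas_per_km = 0.00025      # tonnes of CO2 per km driven on gasoline
ev_per_km = 0.00010       # tonnes of CO2 per km driven on grid electricity

# The electric car starts with a manufacturing "debt" and pays it
# off with every kilometer driven.
breakeven_km = (ev_manufacturing - gas_manufacturing) / (gas_per_km - ev_per_km)
print(f"Break-even after roughly {breakeven_km:,.0f} km")  # ~26,667 km here
```

If most owners replace a car before it reaches the break-even mileage, the extra manufacturing emissions are never recouped, which is exactly the worry raised above.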

The most straightforward and longest-term solution is to find alternatives to driving: Walk more, bike more, use more public transit.  Because so many people travel in cars alone, even flying in an airplane is more energy-efficient per person than driving.  At the very least we could do a better job of getting the most high-emitting vehicles off the road – perhaps by removing exemptions on old cars from emissions testing, or by increasing the cost of driving these vehicles beyond just the cost of repair.  (I won’t even go into the issue of inefficient ships.)  None of these solutions require better technology.  They require smarter use of what we already have.

This is not to say we should stop researching new energy and transportation technologies.  It just means we need to know when it makes sense to implement them.  Right now, further increasing the efficiency of the most efficient cars will make less of a difference than removing the worst vehicles from the road and cutting back on usage overall.  As for new electric cars, we ought to consider factors like their cost from cradle to grave, and whether we can develop more efficient manufacturing processes or more common materials to reduce this cost before we saturate the industry with them.

The question of when to use new technology and when to take a step back to consider the consequences also applies to issues beyond sustainability – for example, health care.  A fair amount of research goes into trying to keep the dying alive longer.  This is a noble pursuit with some ignoble consequences. Atul Gawande’s recent bestseller “Being Mortal” addresses the issue in eloquent detail, but in short, end-of-life care often comes at the expense of well-being and dignity.  It is generally more aggressive than patients want and may also hinder the ability of informal caregivers (such as family and friends) to adjust to the loss.  End-of-life conversations are associated with better quality of life in the final days as well as lower healthcare costs – and they don’t require any technology at all.

In the case of end-of-life care, focusing on technological progress dodges the central issue: Should we always prioritize life over death?  This is just one example of when focusing too much on technological progress can obscure bigger, more important questions.  An admittedly trite metaphor is new action movies.  While flashy special effects may sell, they cannot substitute for character and story.

The world is full of unsolved problems.  It is also full of problems for which solutions already exist, if we only leverage them.  When we slow down for a minute, consider the available options and more carefully assess the consequences of various modes of action, we have a better chance of directing our efforts where they ought to go – for the good of ourselves and the issues we face.

Contact Mindy Perkins at mindylp ‘at’ stanford.edu

Urban ecosystems: An interview with Denis Hayes (Part II)

In a previous column I outlined Denis Hayes’s role in constructing the Bullitt Center, a self-sustaining green building in Seattle. Now I would like to take a step back to look at Denis’s history with Stanford and what we can learn from him about turning thought into action.

Denis not only attended Stanford but later served as an adjunct professor and on the governing board. He chuckled when I asked him how his experience as a student prepared him for what he’s doing now. “You mean how did seizing and occupying buildings prepare me to build one?” He went on to explain that in law school he gained experience negotiating with contractors and addressing financial considerations – which were “vastly more complicated” than he expected.

Such practical skills for navigating society and industry are as key to making change as vision and ambition. For Denis, an integral part of realizing his idea was finding people who had a relaxed schedule and the determination to figure out how to make things work. Some building developers want to do everything as quickly as possible, but novel projects like the Bullitt Center require flexibility and resolve to develop new technologies and methodologies along the way. The same could be said of cutting-edge efforts in other fields.

So what can Stanford students do to help turn talk into walk? With regard to sustainability, Denis advises looking inward to the University itself and exerting leverage on the administration. Consider how much construction is going on on campus right now: a demolished library, three new dorms and a renovated gym, to name a few. New buildings are a golden opportunity for Stanford to start thinking in the long term. Although developers want to offer the lowest upfront price they can, the most durable and resilient buildings often have the highest upfront costs. Yet such buildings are not only cheaper in the long run but also a better investment: “You have a secure rate of return in things that decrease utility bills and running costs,” Denis explains.

Another critical component of turning dream into reality: Build on what’s already there. Denis reminds us to tap into existing university resources, such as the renewable energy expertise of Professor Emeritus Gilbert M. Masters, to help shape new sustainability programs and educational efforts.

And the good thing is, once the path is there, it’s easier to make people walk. In light of Denis’s lessons in leading by example, we can consider an ongoing environmental effort on campus: the student-led petition to replace Stanford’s grass landscaping with drought-tolerant native plants. Right now, resistance comes largely from the upfront costs of redoing Stanford’s lawns: the time, the money and the work of convincing people that the University’s image won’t be tarnished by the change. Since the new construction projects will require new lawns, we have a great opportunity to implement native vegetation and persuade people that the transition makes long-term sense – and doesn’t decrease the campus’s beauty at all.

What about pursuing change in the world outside sustainability? Denis has experience with that as well. As an undergraduate at Stanford, he got involved with social activism. With issues like civil rights and the Vietnam War making daily headlines, “the world had a lot of immediacy,” he recalls. Students in his generation felt that there was no community to champion those causes, so they took the burden on themselves. During his senior year he set out to ban classified research at Stanford – a goal others thought exceedingly difficult, but one he saw achieved by the time he graduated. Of the accomplishment, he says, “Idealism coupled with determination can let you go farther than people think you can go.”

His generation had a desire to “do it better and smarter” and to “pass on a better world than we inherited from our parents.” But in his opinion, “we kind of failed you.” He would advise Stanford students to get engaged while they still have the flexibility, energy and intellectual base to give them confidence. “The most valuable thing in your life is your time,” he says.  “You shouldn’t waste it waiting to be five or ten years older before you get involved.”

At 70 years old, Denis is free to admit that there is “not a man [his] age” who wouldn’t give everything he has for youth. But the years he’s spent in pursuit of change have given him an equally valuable skill set: knowledge, experience and the foresight to pass them on. And these qualities are just as important for making a difference – one building at a time.

Contact Mindy Perkins at mindylp ‘at’ stanford.edu.

Urban ecosystems: An interview with Denis Hayes (Part I)

Wouldn’t it be cool if cities were like ecosystems — efficient, resilient and self-sustaining?

That’s the principle behind the Bullitt Center, a commercial office building in Seattle that is three times more energy efficient than most buildings its size.  Outfitted with low-flow toilets, automated blinds and a geothermal heating system (among other features), the Bullitt Center is designed to supply all its own needs and then some.  While the building’s “greenness” alone is impressive, the process behind its construction is even more important as an enabler for future efforts in sustainability.

I had the distinct pleasure of speaking with Denis Hayes, president of the Bullitt Foundation, about his involvement with the Center.  A Stanford alum, Denis was the principal national organizer of the first Earth Day in 1970, and has continued to promote sustainable practices through service with numerous organizations and an ambitious vision for healthy, living cities.  Denis’s story is a remarkable example of being the change you wish to see in the world.

Denis’s vision for the Bullitt Center was to create a “living” building that recycled resources and provided a comfortable, naturalistic environment for people inside and out.  But pushing biomimetic projects through industry can be a challenge.  Architects tend to embrace biomimicry with the understanding that humans can learn a great deal from nature.  Developers and bankers, however, generally don’t want to take the risk of investing in such technology.  Luckily, this was less of an issue for Denis, since the Bullitt Foundation put up the incremental funds itself so it didn’t have to rely as much on borrowed money.

The biggest obstacles the foundation encountered were regulatory.  “It’s pretty much illegal to build a green building,” Denis remarked, adding that they had to work through many layers of bureaucracy in order to install “dramatic” amounts of glass, implement composting toilets, and even catch rainwater, which is outlawed in many states.  On the bright side, now that a “regulatory pathway” is established, a second building will be easier to construct.  And just one wedge in the industry goes a long way toward convincing reluctant developers that self-sustaining architecture is not only viable, but practical, affordable and pleasant.  It’s easy to dismiss something as “flaky […] until somebody has something you can see, feel and touch.”

Indeed, the Bullitt Center reaches out constantly to anyone, anywhere who can help bring about the transition to sustainable cities.  The Center has public tours six days a week; recent visitors include representatives from Disney and Google, as well as the president of Bulgaria.  While not everyone can construct an entirely green building, there are many who are willing to pursue the most important objectives, like energy neutrality and toxin flushing.  (“The composting toilets are a harder sell,” Denis jokes.)  Yet as they say, “big oaks from little acorns grow.”  Enough small steps go a long way toward establishing the legitimacy — and eventually the normality — of sustainable practices.

Of the Bullitt Center’s unique amenities, Denis is personally most proud of the solar panels.  He has spent 50 years advocating for solar power, and now with his help, the cloudiest city in the lower 48 states has a six-story building that could run exclusively on the sunlight that hits its roof.  The 575 panels even generate excess energy that is sold to Seattle City Light, a local utility company.  That the Center supplies power beyond its own walls is a gesture as nice for its commercial benefits as for its symbolism.

Ultimately Denis envisions entire green cities that function like ecosystems, exchanging materials and energy throughout the system in a self-sufficient way.  Already there are experiments with sustainable buildings on a larger scale, such as the ecodistricts targeted for Washington, D.C. and Los Angeles.  Such efforts — including the Bullitt Center — are effective because they lead by example.  They work in practice, not just in theory.  It’s a fact Denis recognized that will help ensure that the movement for nature-friendly urban environments continues to gain momentum.

Contact Mindy Perkins at mindylp ‘at’ stanford.edu.

From Homo sapiens to Homo technologicus

We tend to think of humans as developers of technologies. But how often do we think about technology as a shaper of humans? Usually when we do so, we imagine changes to our individual lives: The advent of the Internet, for example, or the emergence of Facebook. Yet throughout history, technology has affected humans on a much deeper level – that of biology. As technology and culture continue to influence human life on a broad scale, we should consider the possibility that current trends and social behaviors may affect human evolution generations down the line.

Technology has been at the heart of our species since the dawn of humanity. Perhaps as early as 3.4 million years ago, the modern human ancestor Australopithecus afarensis was using stone tools to strip meat from the bones of large mammals. By about 12,000 years ago humans were practicing agriculture, which enticed people to abandon their nomadic lifestyles and develop the technologies we now recognize as essential to civilization.

Technology – and by this I mean physical tools as well as ideas and practices – has influenced human biology in some surprising ways. For example, meat-eating in humans, which in the archaeological record is tied to stone tools for crushing, cutting, scraping, and hunting, may underlie the evolution of intelligence. Scientists have puzzled over a link between weak jaws and bigger brains, the key idea being that skull growth is inhibited by large jaw muscles. Weak jaw muscles are a disadvantage where serious biting and chewing are concerned. Eating meat requires less robust dentition than eating nuts, seeds and berries, so a diet change would remove the pressure for strong jaw muscles and pave the way for bigger brains – in addition to enacting a plethora of other alterations to the skull and gut. (Meat also supplies more energy per unit weight than plants, ensuring that humans could power all that extra gray matter.)

In fact, cooking may make strong jaw muscles even less necessary because heating tough, fibrous foods softens them.  And according to Alfred W. Crosby in “Children of the Sun,” a major revolution in human history occurred when ancestral humans began cooking, providing the genesis for complicated social behaviors and expansion to previously unoccupied ecological niches. So from the perspective of what’s “natural,” diets like raw veganism may not make sense for modern humans. Eating meat and cooked food may be as entrenched in our genes as it is in our society.

The extent to which human-driven phenomena, including technology and culture, can affect human nature itself raises an interesting philosophical question: Should our goals for societal development include mindfulness of how Homo sapiens evolve in response?

The issue is particularly interesting given that the rate of human evolution may actually be increasing because there are more humans alive today than ever before. And if the fertile really do inherit the Earth, what might that say about future population demographics given the differential reproductive rates of people in developing vs. developed countries and even within nations between, say, the average Mormon and the average non-Mormon?

But most controversial is the notion that human races may be evolving away from each other as environmental and cultural factors create situations that favor different traits. Such an effect may already be observed in the lactose tolerance of ethnicities that historically domesticated cows and the different levels of alcohol tolerance between ethnicities (including the famous Asian flush). For those of us who don’t want Homo sapiens to fracture into multiple distinct species, it’s worth taking a moment to think about the consequences of emphasizing differences over commonalities. Some conflict may be inevitable, but a rhetoric of unity should eventually dominate our societal conversation.

Although the rate of technological change far outpaces that of human evolution, the only “natural state” for humans – and for the Earth – is flux. We can’t deny that we are interconnected with technology. We can’t turn back the clock and become hunter-gatherers in the sense that our ancestors were, any more than we can exactly predict the evolutionary consequences in a million years of what we do today.

Yet we should take heart in the biological adaptability of humans to changing circumstances, even as we are mindful of the trade-offs we make as a result of technological dependence – like exchanging nut-chewing for intelligence. When we imagine the kind of future we want our descendants to inherit, we should bear in mind that they will not be Homo sapiens as we know them. They will go on evolving right alongside technology – and maybe complicitly, we will have contributed to that evolution.

Assuming the robots don’t take over first, of course.

Contact Mindy Perkins at mindylp ‘at’ stanford.edu.

Screen time: When the world is flat

When I was little I had no depth perception.  Because one of my eyes was farsighted and the other nearsighted, almost nothing I looked at could be in focus in both.  My brain, unable to reconcile the difference, began to ignore the signals from my left eye.  If not for a pair of glasses and months of vision therapy, I would still see the world as though it were a photograph.

That ostensibly doesn’t matter for a large percentage of my time nowadays, which is spent staring at a wafer-thin computer screen with simulated shadows beneath the application windows.  Yet I’m convinced that eye strain isn’t the only consequence of whittling away the hours focusing on a backlit rectangle less than an arm’s length from my face.  Screen time comes at the cost of interaction with the physical world — at the price of sensory experience and associated benefits to learning.

Let’s start with vision.  Your brain uses many cues to determine how objects relate to each other.  But being able to perceive depth through binocular vision — the sense of depth that results from seeing an object from a different angle in each eye — is essential for anything from basketball to gardening.  We often take the ability for granted; something as simple as going down an escalator can be a frightening ordeal if you can’t judge where the steps start to drop.  (Believe me, I know.)

Among children in kindergarten and early elementary school, those who performed better on tests of depth perception and visual-motor skills also did better in reading, spelling, writing and mathematics.  This suggests there is a connection between perceiving and interacting with the 3D world and understanding more abstract concepts.  Since depth perception — and related skills such as eye-hand coordination — are to an extent learned, we may be introducing consequences down the road if we hamper the development of these skills by replacing our children’s Legos with iPads.

Screen time does more than eliminate binocular vision.  It removes taste and also smell, that underappreciated harbinger of memory and invisible mode of interpersonal communication.  Significantly, it eliminates haptic information, what you learn from touching something.  Especially given that the human brain is wired for visual and tactile processing, could we be doing ourselves a disservice by “flattening” formerly 3D activities?

In a recent talk at the Stanford School of Medicine, Dr. Temple Grandin suggested schools reimplement workshop classes with hands-on activities like woodworking or sewing.  For many people, the act of using one’s hands to produce a physical result is a more effective way to learn than traditional lectures.  In fact, researchers are investigating tangible user interfaces that allow users to navigate and analyze digital data through tactile media such as sand, blocks or liquid.  There is even evidence that reading paper books has benefits over reading e-books: The layout and structure of bound books helps readers generate mental maps of their content, while the act of physically turning pages contributes to a reader’s sense that he or she is in control.

Screen time may be even more prevalent outside the classroom.  Americans buy fewer toys and board games for their children as apps and videogames consume more of the young generation’s attention.  Computer chess is familiar to many of us, and recently I learned there is a DVD game version of Candyland.  I won’t even get started on television and movies; despite what advertisers may claim, 3D films are no substitute for reality.

Don’t get me wrong — the digital revolution and the accompanying proliferation of screens have brought more information to more people’s fingertips than at any point in history.  Nonetheless, we shouldn’t forget that even computers have hardware, that we have five senses with which to explore the world beyond our desks.  Requiring less screen time for work and reducing our consumption of on-screen entertainment could supplement efforts to encourage children and adults to exercise or seek in-person social interaction, both key to physical and emotional health.

While we can explore a wealth of material from our computers, we should seriously reconsider how much time we spend on them at home, in the office and on the go for work, for correspondence and for play.  We don’t have to give up our screens entirely, but we should make sure the time we spend on them isn’t just for lack of something better to do.  After all, we each have more than one eye for a reason.  The world was not meant to be flat.

Contact Mindy Perkins at mindylp ‘at’ stanford.edu.

Toward a sustainable, bio-inspired future (Tue, 11 Nov 2014) https://stanforddaily.com/2014/11/10/toward-a-sustainable-bio-inspired-future/

A dog. A burr. A trip to the Alps. The story of George de Mestral’s invention of Velcro reads like the quirky younger cousin of the popular myth about Newton and his apple, in which a fateful encounter with a piece of plant matter results in a wildly successful insight. While Newton’s theory of gravity was more scientifically fundamental than de Mestral’s observations of the hooks on the burdock burr, the Swiss engineer’s subsequent innovation represents a promising paradigm in design thinking – one that could lead a revolution in sustainable technologies.

Biomimicry, biomimetics or bio-inspired design is the process of drawing inspiration from nature to solve problems in engineering. Applications range from products to algorithms to manufacturing processes and often provide major improvements over existing technologies.

Consider the termite mound, which maintains a near-constant internal temperature of around 30°C in an environment where daily temperatures vary by about 50°C. Zimbabwean architect Mick Pearce designed the Eastgate Centre in Zimbabwe and the Council House 2 building in Australia to mimic the vaguely conical shape and ventilation system of the mounds. Overall, the passive climate control systems in the buildings reduce energy and water use by 70-90 percent.

In contrast to human technologies, which require periodic replacement and repair, many natural structures maintain themselves. For example, sea urchins have self-sharpening teeth that allow them to chew through rock. If we could replicate the alternating crystalline and organic layers that make up the structure of these teeth, we could build nanoscale needles that stay pointy even with repeated use. Perhaps eventually we could also make larger-scale tools that don’t require manual sharpening. In the distant future we might even develop machines that repair themselves, similar to the way skin heals over a wound.

And just imagine if we could mimic not only natural structures, but natural manufacturing processes – ones that use common elements instead of rare ones and require far less energy than our current procedures. If a sponge can produce optical fibers at the temperatures found on the ocean floor, surely we can manufacture fiber-optic communication cables without heating gas to a few thousand degrees.

But designing technologies and methodologies is only the first step toward a more sustainable biomimetic future.  What we need is a push for industry to implement these new ideas.  Of course, that’s easier said than done. Jay Harman, author of “The Shark’s Paintbrush,” had difficulty convincing companies to adopt his whirlpool-inspired spiral fan, even though it was 75 percent more efficient than traditional models. As a result, a possible leap in the efficiency of refrigerators, automotive cooling systems and other commonplace technologies never got off the ground.

Part of the problem is that we often don’t recognize that technologies we’ve “mastered” can be improved in ways we aren’t used to considering. It’s one thing to get people excited about novel gadgets like Google Glass and Tesla cars, or even developments in solar power and biofuel. It’s much harder to convince people that reinventing the wheel might be a good idea.

But that’s exactly what we need to do. And when the evidence stacks up that “unconventional” designs like Harman’s fan are, in fact, drastically better than existing models, we can’t fail to adopt them just because we’re too stubborn to acknowledge their superiority. Besides, when it comes to biomimicry, Mother Nature has had a few billion more years of prototyping experience than we have.

In the long term, we should educate the next generation of engineers to consider bio-inspired design. Even if students don’t go on to practice biomimicry, at least they will have experience with it – which goes a long way toward establishing its perceived legitimacy in industry. Academic institutes for biomimetics, including Harvard’s Wyss Institute for Biologically Inspired Engineering and Georgia Tech’s Center for Biologically Inspired Design, are an excellent start. Stanford hasn’t so explicitly endorsed biomimicry, although the Bio-X program and last spring’s “d.nature” pop-up class suggest that it’s on the radar. With a little more effort – a course in biomimicry, a seminar or even just a lecture in introductory engineering classes – we’d be ideally situated to help lead the coming biomimetic revolution.

As resource shortages and the energy crisis loom near, biomimicry provides countless opportunities for improving the efficiency and effectiveness of our technologies. If we want to move toward a greener future, we should look to the green that’s already around us.

Contact Mindy Perkins at mindylp ‘at’ stanford.edu.

Sentience and sentiment (Tue, 28 Oct 2014) https://stanforddaily.com/2014/10/27/sentience-and-sentiment/

“If being human is not simply a matter of being born flesh and blood, if it is instead a way of thinking, acting, and feeling, then I am hopeful that one day I will discover my own humanity.”

Thus spoke one of “Star Trek’s” most memorable (and endearing) characters, the android Data from “Star Trek: The Next Generation.” In contrast to his stoic predecessor, Mr. Spock, Data seeks to understand and experience emotions — which in the show and elsewhere are nearly always placed in opposition to logic.

Yet emotions and logic are not so at odds as they are often treated. On an evolutionary level they are fundamentally interdependent, a fact that has significant implications for how feelings might manifest in artificial intelligences. To understand this, we need to investigate the logic behind emotion and the emotional guide for logic.

But first: What exactly are emotions? It’s pretty much impossible to explain what they feel like in any non-self-referential form. What they are is at least an approachable question.

I once heard an excellent description of emotions as the world’s fastest conclusion-drawing machine: Your brain synthesizes a huge amount of information into an impression, a feeling. Emotions may not be the most accurate method to analyze circumstances, but they are an efficient way to assess — and react to — many situations.

Inasmuch as emotions involve integrating all kinds of information, including physical perceptions and past experiences, they are an emergent property of our neural system. Theories that brain regions are each individually responsible for generating different emotions — i.e., that one region of your brain makes you happy, one makes you angry and so on — simply don’t hold up. A more scientifically promising model is that brain regions are networked together, like computers over the Internet, and that different regions work together to produce different emotions.

Is it so far-fetched, then, to imagine that emotions might emerge in sentient A.I. even without our deliberately putting them there? While a robot may never experience physical emotional cues such as an elevated heartbeat, cold sweat or butterflies in the stomach, its mental perceptions of feeling could still exist. If emotions are indeed akin to an instantaneous summing up of inputs, then could a conscious machine be said to experience emotions as its internal processes produce new conclusions?

Depending on the inputs, the answer may not be meaningful. What it “feels like” to calculate the square root of two is not very relevant to any emotions we classically recognize as human.

A better question is: What would an A.I. do with the ability to process the kind of information that humans care about — like who is friend or foe, what is polite in a certain situation and what one does for fun?

After all, from an evolutionary standpoint, emotions are a mechanism for motivating behavior: Fear induces prey to flee predators, affection facilitates group cohesion, happiness rewards beneficial activities. The logic behind emotions is that they push individuals to do what needs to be done to survive and reproduce. For our eventual A.I., if we design their “brains” to be more like modern computer chips, we may directly program them to have a goal, similar to how the purpose of ad-blocking software is to block ads. If instead we model their “brains” off biological ones, we may exercise an artificial selection process whereby we continually experiment with designs for A.I. and only keep the ones that behave the way we want. In this way, we might indirectly choose characteristics (e.g., processing speed, emotional awareness) that favor a certain outcome (e.g., efficiency, fondness for humans) — a bit like how we domesticated wolves into dogs.
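To make the selection idea concrete, here is a minimal sketch of such a loop. Everything in it is invented for illustration: the “design” is just a list of numbers, the mutation step is a random tweak, and the behavior score is an arbitrary target-matching rule rather than any real measure of an A.I.’s conduct.

```python
import random

# Toy artificial selection: repeatedly keep the best-behaved designs,
# discard the rest, and refill the population with mutated survivors.

def behavior_score(design):
    # Stand-in for "behaves the way we want": closeness to an arbitrary
    # target profile (e.g., speed, caution, fondness for humans).
    target = [0.9, 0.5, 0.8]
    return -sum((d - t) ** 2 for d, t in zip(design, target))

def mutate(design):
    # Randomly tweak one characteristic of a surviving design.
    tweaked = list(design)
    i = random.randrange(len(tweaked))
    tweaked[i] += random.uniform(-0.1, 0.1)
    return tweaked

population = [[random.random() for _ in range(3)] for _ in range(20)]
for generation in range(100):
    population.sort(key=behavior_score, reverse=True)
    survivors = population[: len(population) // 2]
    population = survivors + [mutate(random.choice(survivors)) for _ in survivors]

best = max(population, key=behavior_score)
print("selected design:", [round(x, 2) for x in best])
```

Nothing in the loop “understands” the traits being selected, which is exactly the point: characteristics emerge from what the breeder rewards, not from what the designs intend.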

In either case, we must consider what goals a rational A.I. possesses. Too specific, and the system will break down once the objectives are achieved: A robot whose job is to construct one specific building will stop working after that building is done. Too general, and too many methods will become acceptable: Imagine a robot programmed as a bodyguard who concludes that the best way to protect its owner is to lock her into a room where no one can threaten her. Logic itself, after all, is just a means. To use it, you need at least a starting point (from which to trace implications: “if x, then y”) and preferably also an endpoint (to which you can scope out a reasonable path). Emotions provide us with starting points. Moral values supply the endpoints. Together, they motivate action.
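A toy example of what a starting point and an endpoint buy you: given some invented “if x, then y” rules for the hypothetical bodyguard robot, forward chaining from starting facts derives everything it can, while a supplied goal tells the search when a reasonable path has been found. The rules and facts here are made up purely for illustration.

```python
# Invented if/then rules: (condition, consequence) pairs.
RULES = [
    ("threat_detected", "raise_alarm"),
    ("raise_alarm", "shield_owner"),
    ("shield_owner", "owner_safe"),
    ("owner_safe", "stand_down"),
]

def chain(facts, goal=None):
    """Apply rules until nothing new follows, or until the goal is reached."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for condition, consequence in RULES:
            if condition in derived and consequence not in derived:
                derived.add(consequence)
                changed = True
                if consequence == goal:  # the endpoint stops the search
                    return derived
    return derived

print(sorted(chain({"threat_detected"})))                     # all implications
print(sorted(chain({"threat_detected"}, goal="owner_safe")))  # stops at the goal
```

Without the starting fact, nothing follows; without the goal, the machinery grinds out implications with no notion of when it is done. That division of labor is what the column assigns to emotions and values.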

Thus, in developing artificial intelligences, especially ones that derive logical and accurate conclusions (as our computers currently and predictably do), we need to be cognizant of how emotions and values manifest in their minds. What will motivate them to act? If we think about the characteristics we emphasize when we design them, perhaps we can piece together their emergent feelings in the same way that we evolutionarily explain human emotions in the context of survival.

In the meantime, we’ll have to consider whether a trend toward more interactive, human-friendly software and robots like Siri and Diego-san will produce a race of machines as flawed as we are, only smarter, stronger and more powerful.

Alternatively, we may end up with the benign Data, whose programming sets him apart from his crewmates but whose struggle to be human makes him humanly relatable. We could stand to learn from his curiosity, his earnestness, his honesty — and his dedication to one day “discover [his] own humanity.”

“Until then,” he tells us, “I will continue learning, changing, growing and trying to become more than what I am.”

Contact Mindy Perkins at mindylp ‘at’ stanford.edu.

Man-made humans (Tue, 14 Oct 2014) https://stanforddaily.com/2014/10/13/manmade-humans/

Related to the impact of brain stimulation techniques on the traits we value as a society, which I discussed in my last column, is the not-so-distant prospect of genetically handpicked children.  Often referred to as “designer” or “commodity” babies, such children are selected through genetic screening or genetic engineering to exhibit certain traits.

Historically, genetic screening processes such as pre-implantation genetic diagnosis (PGD) were used primarily to identify life-threatening genetic diseases. Within the past decade, however, PGD has begun to be used for gender selection – an illegal application in many countries, but legally permissible in the U.S. In 2009, a Los Angeles fertility institute announced that it hoped to one day offer customers the opportunity to use PGD for cosmetic purposes. And in 2013, 23andMe, a Mountain View-based company, patented a process that could match patients’ genetic profiles to genetic profiles of egg and sperm donors in order to maximize the likelihood that children express certain characteristics.

The prospect of screening humans to filter out ones that don’t look how we want them to is horrifying. Turning people into custom-order products is equally appalling – not least because of what message that sends them about who they ought to be. Regardless, genetically engineered children will become the societal norm step by step.  As that happens, we must understand the implications of what we’re trying to do.

“Make them better” is the easy answer with a not-so-easy caveat: How do you define “better”?

I propose a Darwinian definition: more fit to survive. I don’t mean to say we should ruthlessly outcompete other living beings; instead I refer to survival in a universal societal sense. Humanity has a greater chance of dealing successfully with upcoming crises if the “superorganism” of society as a whole is adaptable. Harmonious social functioning and the ability to develop solutions to new problems are necessary for long-term endurance. Open to interpretation are what qualities promote these abilities; I would argue for scientific understanding, creativity, compassion and motivation, among others.

The Darwinian definition, like any other definition, suffers from ethical uncertainties in terms of implementation, if indeed it’s even possible to implement. If we assume for the moment that genetically engineering children begins with companies such as 23andMe providing services to customers, a host of individual concerns arise. Will parents love a made-to-order child that, by chance, doesn’t possess the qualities they ordered? Will a child deliberately endowed with enhanced musical abilities feel unable to pursue a career in something other than music? Will a new form of discrimination arise against “natural-born” children, à la Gattaca?

Another concern is that genetic engineering will give people a way to enact their prejudices. If enough parents want children with certain traits – say a specific eye color or height – children without those traits might be singled out and could internalize feelings of unwantedness or, depending on the trait in question, inadequacy. Related repercussions could include a decrease in genetic diversity and other side effects similar to, for example, the current projected surplus of males in China, India and South Korea due to gender selection.

Availability poses another problem. If not everyone can afford to genetically modify their children, we might end up exacerbating existing social inequalities. In a dystopic view, an upper class of genetically modified superhumans one day rules over a lower class of natural-born humans.

While it might be prematurely pessimistic to conclude that genetic engineering will result in widespread genetic homogenization or a new conception of the Master Race, it would be naive to claim that the existence of genetically engineered humans will not alter social thinking or societal structure.

Yet those changes wouldn’t necessarily have to be negative. One Oxford professor argues that we have a moral obligation to genetically select our offspring in order to filter out dangerous personality traits.  He believes the result will be a more peaceful, intelligent society.

His viewpoint touches the heart of the issue: Is it morally wrong to try to improve the human race? Objections to the prospect often hinge on the process: “breeding programs” violate reproductive rights, infanticide is murder, genetic experiments generate suffering when they fail. But if we had a reliable method of modifying the genome to produce more capable humans and we could guarantee a just and peaceful transition from a “natural-born” population to a “better” genetically engineered one, would we be wrong to do so?

If the answer is no – if we do decide there is a morally defensible ultimate goal for genetic engineering – the path there will be fraught with ethical pitfalls. I do not subscribe to the belief that the ends justify the means, especially given that the traits we decide to propagate to the next generation will literally change humanity. The steps we take should be considered both individually and in context. We should be worthy children to our parents and worthy parents to our children, however they – or we – may be.

Contact Mindy Perkins at mindylp ‘at’ stanford.edu.

Thinking (handi)cap (Tue, 30 Sep 2014) https://stanforddaily.com/2014/09/29/thinking-handicap/

It’s Thursday night of finals week, spring quarter. Tomorrow night you have a math final and the evening after that you have physics — neither of which you’ve started studying for, thanks to the stubbornly low throughput of your heap allocator (which is due at midnight tonight with your last late day). To make matters worse, you’ve depleted your stash of English breakfast tea, since Arrillaga predictably ran out of sachets this morning. After a moment’s deliberation, you open up your bottom desk drawer, where you keep your only partially legal last line of defense: an electrode-studded “thinking cap” that will improve your focus for the next few hours and boost your numerical cognition enough to teach you differential equations tomorrow before the exam.

This scenario could be reality for incoming freshmen within the next decade with the help of a technique called transcranial electrical stimulation (TES). In TES, a series of electrodes fitted to the scalp apply a small battery-driven current to the brain. At a basic level, the current changes the voltage across the membrane of the neurons to make them more excitable. The result is that the brain responds more strongly to stimuli, so tasks like learning are enhanced. In one study, a few bouts of TES during learning helped children with developmental dyscalculia — dyslexia for math — and improved their numerical cognition for up to six months.
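For a crude picture of what “more excitable” means, consider the textbook leaky integrate-and-fire toy model of a neuron, sketched below. This is emphatically not a model of TES (the parameters are arbitrary illustration values), but it shows the qualitative effect the technique relies on: adding a small bias current to the same stimulus produces more spikes.

```python
# Leaky integrate-and-fire toy neuron: membrane voltage v accumulates input,
# leaks away over time, and emits a "spike" whenever it crosses threshold.

def spike_count(input_current, bias=0.0, steps=1000, dt=0.1):
    v, threshold, leak = 0.0, 1.0, 0.05
    spikes = 0
    for _ in range(steps):
        v += dt * (input_current + bias - leak * v)
        if v >= threshold:
            spikes += 1
            v = 0.0  # reset after firing
    return spikes

print(spike_count(0.06))             # baseline stimulus
print(spike_count(0.06, bias=0.02))  # same stimulus plus a small bias current
```

The second call fires noticeably more often even though the stimulus itself never changed; the bias simply leaves the neuron closer to threshold.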

TES doesn’t have to be restricted to those with learning disorders. It could theoretically be used as a cognitive enhancer, much the same way that drugs like modafinil and Ritalin — intended to treat ADHD — are used by healthy college students to improve attention span.

But should it be used? Returning to the opening scenario, is it cheating to use a cognitive enhancer if you are naturally unimpaired (whatever that may mean)?

Some scientists argue that, given the societal benefits of enhanced cognitive performance, the concept of cheating is irrelevant as long as everyone has equal access to technologies like TES. In this view, using TES is morally equivalent to sleeping enough and eating well: You are taking advantage of available resources to be healthy and productive; you are not passing off another person’s work as your own and your performance still requires self-discipline and dedication.

In practice, access to TES cannot be easily guaranteed for the entire population, and it may even widen the performance gap if intelligent individuals benefit more from its use.

The issue is most concerning because it relates directly to our concept of human worth. In targeting capabilities to enhance, we implicitly assign them value. Focusing on enhancing specific qualities such as mathematical prowess may devalue other qualities like emotional intelligence that are perceived as less marketable or productive.

Herein lies the danger: Do we understand enough about the way people think, how different skills are used, and how society as a whole is shaped by individuals to be able to discriminate between useful and useless skills?

On one level, we don’t, as evidenced by the continuing debate on the value of a liberal arts education in an increasingly technologically focused society. Yet on another level, we know very well which skills are useful and which useless.

The heart of the matter is the distinction between two types of skills: abilities and traits. Abilities are tied to pursuits: mathematical ability, artistic ability, linguistic ability. Traits enable pursuits: dedication, determination, responsibility. Traits are expressed as behaviors, such as work habits, and at least some of the neural connections governing them are learned. (That’s why the more you procrastinate, the harder it gets to stop procrastinating.)

If we use TES for cognitive enhancement, then we need to consciously target traits, not abilities. Rather than changing what individuals are good at, we would make individuals better equipped to do what they do. That way, we could maintain a population with diverse abilities to draw on in times of crisis, when no one can predict exactly what fields of expertise will contribute to the solution.

Which isn’t to say that individuals should not be well rounded; everyone should have a certain level of proficiency across the board. But if we’re going to tap human cognitive performance with commercial technology, we should value the skills that will allow people to excel — the skills they can transfer across fields.

There is a nontrivial chance that using TES to target traits will convert commendable skills like endurance and focus into commodities, or reclassify normal human cognitive function as a disease. It’s certainly odd to think that today’s brightest could be tomorrow’s handicapped if brain-stimulating technologies redefine what it means to be intelligent. We could minimize this risk by framing TES as augmenting proficiencies rather than compensating for deficiencies.

No regulations currently exist in the U.S. or EU regarding brain stimulation techniques. Any standards that are put in place will provide precedents for future regulation of human-enhancing technologies. Policy, however, is not equivalent to culture, and political lobbying and social rallying — while important — are not the only ways to instigate change. With regard to cognitive enhancers, the aggregate effect of individual uses will determine what skills we value and what role TES plays in enhancing them.

So think hard before you put on your thinking cap. It’ll be worth knowing that what you do — and how you do it — reflects what you believe in.

Contact Mindy Perkins at mindylp ‘at’ stanford.edu.

The expectation of perfection (Tue, 19 Aug 2014) https://stanforddaily.com/2014/08/18/the-expectation-of-perfection/

Today, with the rapid spread of media through the Internet, it is easy to find examples of great people. Anyone with access to the web can listen to YouTube videos of the world’s most virtuosic musicians or follow skilled artists on deviantART or Tumblr. The talented, the intelligent, the accomplished and the beautiful are popularized in lists from TIME Magazine’s 100 Most Influential to countless articles on Buzzfeed.

How we react to the greats depends a good deal on our own moods, circumstances and personalities. Frequently we’re inspired, sometimes intimidated, often entertained, occasionally envious. While some people tend to be motivated by individuals they admire, others are more inclined to feel inadequate — a condition exacerbated when “adequacy” is unattainable.

A good number of us feel inferior when we compare ourselves to the superstars we find on the web — not necessarily because we have less potential than they do, but simply because much of what we see isn’t authentic. Many of our media idols are not real people. They are real people artificially edited into ideals. And when we compare ourselves to ideals, we will always come up short.

As long as we realize everything has been rigorously filtered or technologically doctored, shouldn’t we be able to distinguish realistic from unrealistic expectations?

Unfortunately, “no” is a strong contending answer. According to Daniel Kahneman in “Thinking, Fast and Slow,” the human brain is naturally gullible. Believing what you see or hear is the default state; disbelieving, or unlearning, requires conscious processing. In one experiment, two groups of people were given a series of nonsense statements and told that each statement was either true or false. One group of people was instructed to remember a series of digits as the statements were presented to them — in effect, distracting their conscious minds. Later, each participant was given a memory test to see how many of the statements they thought were true. Those told to remember digits as they saw the statements recalled many of the false statements as true.

What does this mean for recognizing ideals? Basically, if we are not consciously aware that something has been tampered with, as far as our brains are concerned, it’s reality. So if a model looks impossibly thin because half of her body weight has been removed in Photoshop, unless I make an effort to tell myself that her picture is unnatural, I may just believe she exists in such a skeletal state.

But just because I believe it doesn’t mean it will affect my body image. I still have to internalize the ideal that thin women are more attractive (and presumably feel the need to be attractive) in order to suffer the anxiety and decreased self-esteem that accompany a comparison of my own flawed form to the popsicle-stick woman on the web. It’s a vicious, self-reinforcing cycle, however, because media — including the Internet — play a significant role in what ideals people internalize. Although most studies have focused on women’s (dis)satisfaction with their bodies, I propose that individuals treat other ideals, ranging from artistic and academic achievements to concepts of happiness and “the good life,” in a similar way.

Even realizing my hypothetical model is fake might not help me, though, since research shows that people can and do internalize and compare themselves to impossible ideals. Therefore, I think it’s reasonable to theorize that inundation with unrealistic, inauthentic images may exacerbate poor self-esteem and emotional anxiety derived from social comparison or perfectionism. In other words, because our media is overflowing with airbrushed all-stars, we may have a higher chance of holding ludicrous standards that we mistakenly believe are attainable.

Of course, this argument presumes that we equate technologically modified versions with ideals, which is not always the case. Consider music. Automatic pitch correction ensures that slightly off-key notes can be slid into tune, while techniques for stitching together multiple recordings can correct larger blunders. By some definitions, the results are flawless. Yet, by other definitions, they are inhuman, lacking the charm of spontaneous nuance and divorced from the unpredictability of musical performance.
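The core trick is easy to sketch: measure how far a note sits from the nearest equal-temperament pitch and pull it back. The toy function below only quantizes a single frequency; real pitch correction must also detect the pitch inside a recording and resynthesize it smoothly, which is where the actual engineering (and the “inhuman” character described above) lives.

```python
import math

A4 = 440.0  # reference tuning, in Hz

def snap_to_semitone(freq_hz):
    """Return the equal-temperament frequency nearest to freq_hz."""
    # Distance from A4 in (possibly fractional) semitones...
    semitones = 12 * math.log2(freq_hz / A4)
    # ...quantized to the nearest whole semitone, then converted back to Hz.
    return A4 * 2 ** (round(semitones) / 12)

# A note sung about 30 cents sharp of A4 gets pulled back to 440 Hz.
print(round(snap_to_semitone(447.7), 1))
```

Rounding is precisely what erases the “charm of spontaneous nuance”: every note lands on the grid, whether the performer meant it to or not.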

Movements are cropping up to subvert unrealistic ideals, including a recent wave of celebrity-posted unretouched photographs. Even without these welcome efforts, though, I think we would do well to recognize what expectations aren’t worth expecting of ourselves. It’s a fine line between striving to be all that you can and feeling that you will never be enough. For me, it helps to think in terms of progress, not accomplishment; to view failures not as failings but as opportunities. Ideals are not goals to be achieved. At best, they are guides, and some of them are not even worthy of that status.

For those times when we are dismayed by how little we think we have done, the Internet can be a minefield riddled with samples of the world’s best in every category. At our fingertips are the paragons of our aspirations, our ideals more idealized than ever.

But ideals are just that: ideals. We live in the messy, imperfect framework of reality, within which there is endless potential for improvement, development and growth. That, to me, is more inspiring than any “perfection.”

Contact Mindy Perkins at mindylp@stanford.edu.

Being There or Being Present? (Thu, 26 Jun 2014) https://stanforddaily.com/2014/06/25/being-there-or-being-present/

My father once told me that he loves zoos because “the more times you go, the higher the probability of seeing an animal engaging in an interesting activity.” That’s why I was so excited one morning when I spotted the jaguar pacing outside of its den — the big cat usually lounged out of sight in the privacy of its indoor enclosure. Now it stalked back and forth in the open, its glossy pelt rippling over the shifting twin peaks of its shoulder blades, its lithe body a single serpentine motion ending in the mesmerizing pendulum swing of its tail. I stared, mouth agape, for far longer than any potential prey item should admire its predator.

My wide-eyed reverie was interrupted as a boy shuffled in with his red ball cap askew, fiddling with a digital camera. He glanced at the jaguar through the viewfinder, snapped several pictures and exclaimed, “I wish it would sit still so I could take a good photo!”

I was stunned. Here was an animal that in years of zoo visits I’d never even seen, and now not only was it outside, it was demonstrating that it was alive, dangerous and beautiful. I resisted the urge to tell him the museum’s taxidermy exhibit might be more to his liking.

Several years later, I went to see the “Mona Lisa” at the Louvre. There is something bewitching and unnerving about that famous smile — an effect magnified when the portrait is viewed in person. It’s incredible to realize there is still life to that face even centuries after the death of its owner. Yet I was appalled by the number of people who jostled their way to the front, snapped a picture over the heads of other patrons and hustled out. How can you enjoy a painting if you don’t even look at it?

When people take hasty photographs of a great work of art, it can’t be because they think they’ve captured something unique. I also doubt it’s so that they can admire it more later — they probably never will if they’re not taking time for it now, and there are dozens of high-quality images of it just a click away on Google. What people want is the proof that they saw it, their very own sequence of pixels to remind themselves of their exploits or to tout to friends or post on Facebook as evidence that they were there.

But being there isn’t the same as being present.

Our fast-paced, sound-bite culture of texts and tweets shifts attention away from what we’re actually doing to what others think we’re doing. The way we express our inherent human desires to improve our self-images and share our lives with others is altered by the constant connectivity offered by smartphones and social media.

For example, a report in the New York Times noted the increase in vandalism in national parks in recent years, including pictures and names spray-painted onto famous formations such as Twin Owls in the Rocky Mountains. One possible cause may be the newfound ease with which photos are taken and shared. “Kilroy was here” is no longer a message for strangers who are also here but rather a way for Kilroy to boast to friends and acquaintances about where he’s been and what he’s done.

While most of us wouldn’t go so far as to spray-paint graffiti on a saguaro cactus, I do think we need to be cognizant of how much time we spend doing something versus documenting the fact that we’re doing it. Do outings with friends really need to be advertised on Facebook while they’re happening? Is it really necessary to tweet or check in right this instant?

One afternoon while studying abroad in Australia this past fall, I stood on a pier watching dozens of terns streaking over the sea. Now and again, one of them would separate from the others, fold its wings into an arrowhead shape and dive after one of the small silver fish schooling in the shallows — usually unsuccessfully. After a lucky bird emerged with its catch wriggling in its beak only to have a seagull swoop in and steal its prize, I overheard someone rueing what a great picture he could have taken.

Personally, I don’t regret the photos I didn’t take. That’s what memories are for. I view photographs as a way to remind me of what I’ve lived, not as something through which to live my life.

Then again, I’m a zoo-goer like my father. For others, perhaps the taxidermy exhibit is good enough.

Contact Mindy Perkins at mindylp@stanford.edu.
