C2GTalk: An interview with Sheila Jasanoff

How does society view solar radiation modification experiments?

18 March 2022

This interview was recorded on 19 January 2022 and is available with interpretation into Chinese (中文), Spanish (Español) and French (Français).

It is important to see proposed solar radiation modification experiments in a wider social context, says Sheila Jasanoff, the Pforzheimer Professor of Science and Technology Studies at the Harvard Kennedy School, during a C2GTalk.  People want to know who is doing the experiment and what their intentions are, and it is important for scientists and engineers to recognize and address these concerns, and for governance to be built around that.

Sheila Jasanoff is a leading expert on the role of science and technology in the law, politics, and policy of modern democracies, and her work offers fascinating insights into how society navigates emerging technologies, and how decision-makers assess evidence and expertise – which is extremely relevant to our governance conversations. 

She is the author or editor of more than 15 books, including ‘The Ethics of Invention’ and ‘Can Science Make Sense of Life?’, has held distinguished appointments at leading universities around the world, and has served on the board of the American Association for the Advancement of Science.

Below are edited highlights from the full C2GTalk interview.  Some answers have been edited for brevity and clarity.

The world is living through a pandemic, as we have noticed, and people are starting to see the impacts of climate change in their everyday lives.  You might imagine that this is a moment when society increasingly turns to scientists to understand and help navigate these challenges, yet in practice it isn’t so simple. Can you share with us some of your recent reflections on this?  How ready is today’s public to listen to and act upon scientists’ recommendations? 

Mark, it’s a fundamental question.  It has led me to wonder to what extent people ever turn to a thing called “science” or “scientists.”  People do turn to trustworthy sources of information; and, as we know through the Information Revolution of the last couple of decades, information is at your fingertips if you have a computer and access to the Internet.  Is that turning to science, or is that a decision that you are going to turn to this or that source?  For someone in my kind of profession, there are certain “newspapers of record” that I look at.  Is that turning to science, or am I turning to journalists, people like you, who are mediating information for me?  I think people are desperate for information, but it’s not necessarily that they think of themselves as turning to science.  They may have some idea that science lies behind the information, but that’s a more complicated relationship than people having direct access to science.

Of course when we say “turning to science,” that opens a whole bunch of questions about what science is and what that means to people.  I suppose on a basic level science is one set of approaches to organizing and discovering information, and it brings with it a certain set of practices and assumptions.  Do you think that there is some sense in society as to what science is, what the scientific method or approach is, and how that qualifies the information they turn to?

This again is a really interesting question.  I don’t think studies have been done to ask the public what they think science is.  Most of the surveys that have been done over decades now assume that scientists know what science is, and then they go and ask the public: “Do you understand this, that, or the other thing?”  So there have been lots of 20 Questions-type surveys asking, for instance: “Do you know whether human beings and dinosaurs occupied the Earth at the same time?” and it’s a right/wrong answer, but that is not a very good measure of what people think science is. 

Informally — because I haven’t actually seen studies about this — people do have an idea that there is a thing called the “scientific method.”  I’m sure many people would associate experiment with the idea of science.  People have a mental image of what being a scientist means. 

The question that has been asked over and over again is: “Do you trust science as an institution or scientists as kinds of professionals in society?”  In America, which has one of the highest levels of skepticism toward climate science and now toward vaccines as well, the documented level of support for and belief in science is very high and has stayed in the 80–90 percent range forever.  Judging by those kinds of indirect measures, I would say, yes, people do have a mental image that there is a thing called science and that it is to be trusted; and the reason for that is that they conduct experiments, are highly trained, have special knowledge, or what have you. 

Let’s bring this down to a specific example, which we in C2G are focusing on: the science and knowledge around one set of approaches — or potential approaches, some scientists tell us, and not all agree of course — to tackling the risks of climate change.  This is solar radiation modification, solar radiation management, or solar geoengineering, essentially a set of approaches to reflect back some sunlight to lower the temperature.  One particularly controversial approach posits adding aerosols to the stratosphere in order to reflect back some solar radiation, but this brings new risks and uncertainties of its own, including issues of unequal impacts and so forth.

Until now this idea has been largely confined to computer models, academic papers, and discussions between scientists, but recently a group from Harvard University proposed to study aspects of a potential delivery system in the skies above Sweden.  This engendered significant opposition from civil society groups, and the experiment was put on hold pending further investigation of the circumstances.

One key aspect of the opposition was the sense that the experiment indicated a general direction: that doing it would in itself set humankind on an uncertain and potentially dangerous path, which could detract from focusing on reducing CO2 emissions, or set in place circumstances that are hard to stop once begun.  To what extent is there evidence that an experiment like this — which of course could lead to conclusions either way if it’s scientifically genuine — could undermine other action and potentially create issues like a slippery slope towards a certain outcome?

Before I address that question directly, I just want to point out that we have moved with this set of questions far away from a generalized trust in science. We are talking now about engineering, and people don’t think of engineering as science.  Engineering has both good connotations and bad connotations. 

We are also talking about management, solar radiation “management.”  Terms like “management” and “engineering” ring differently with people from the term “science,” so it should be pointed out again that the question is not so much “Do people trust the scientific method?” as, now, “Do they trust people who say they are going to do things with the very ambient atmosphere in which we all live?” and that is a different kind of scenario.

With the questions about “What are the sources of worry and concern about this particular form of intervention?” I think again maybe we need to disaggregate a bit, take apart some of the different strands that are going on in this case.  Probably you would like to get into each of these a little bit differently. 

As you mentioned, it is a Harvard group — I myself teach at Harvard University — but the study in question was to be done in Sweden; so from the beginning there is this question of “Who is the ‘we’ that is carrying things out?” and “What is the chain, the concatenation, one might say, the connection of beliefs that would make you decide, ‘Yes, this is something worth doing?’”

There are two pieces of it.  One is that you really have to believe that the climate threat is sufficiently grave that all avenues ought to be tried, and the second is that you have to believe the particular boundaries that the scientists are laying down.  So this question of “Is it only a first step?” and “No, we don’t really mean to do anything” — why would you be doing this if you didn’t actually mean to be doing something? 

I think people notice that there is a commonsensical contradiction here: You wouldn’t be doing this in the first place if you didn’t have the intention of trying it out at a different level.  So maybe it would be more honest not to say, “Oh, no, this has nothing to do with chemicals, it’s just a preliminary, it’s got no bearing on what comes next, and we’re going to stop and assess,” but instead to admit what it is: that this is a necessary first step, and it creates a preventive, precautionary regime in and of itself.

I think there are contradictions in the ways in which scientists themselves present their work, and people are very good at picking up contradictions, even if they don’t understand mathematical probabilities. 

So, to pick up on that sense of “Why would you do this if you didn’t at some stage intend to try it?”: is it not possible that an experiment like this could basically determine that this will never be possible, or is that just not how it works?

I think it is entirely possible that negative results would drive people away from the thing.

Where are the analogies?  We know that pharmaceutical companies, for instance, have an array of drug possibilities, molecular libraries, and they are constantly testing them out; and if they find that the molecules that held some promise are not behaving in the ways that they imagined, they shelve those and don’t proceed with them.  There are other kinds of examples as well.

From the 20th century, the example of DDT is extremely interesting.  It was discovered as a molecule that was relatively inert.  It was an organic chemical compound.  It was put on the shelf, and for decades nobody did anything with it. 

Then, in the midcentury, in the middle of war, it was discovered that DDT actually has beneficial properties.  In particular, it was very good at controlling the insect-borne diseases that were infecting the Allied soldiers.  It was picked up first as a public health tool and then found to be an effective agricultural pesticide.  Then it was sprayed widely because people latched onto this one beneficial aspect.  In particular, it proved to be extremely efficacious against malaria.

But people had not noticed that there were ecological implications; so, suddenly, starting in the 1960s and with Rachel Carson’s famous book, Silent Spring, DDT became Public Enemy No. 1 for environmentalists.  Now it is kind of anathema because it is being targeted as possibly a human carcinogen, for which the evidence is perhaps not that secure, but it is definitely a persistent chemical compound. 

So, in its lifetime of about 100 years now, DDT has gone through these different kinds of articulations, and people are aware of that.  What does that represent?  That science is partial, that science is provisional, and that science has conclusions that seem very trustworthy in the moment?  But those conclusions are always targeted to specific ends and specific uses, and those uses may not be the only ones, those effects and those implications may not be the only ones. 

These are, I think, some of the things that even highly educated people think about when they think about an early experiment.  Yes, you can stop, but maybe not if you have the framing wrong and were not looking at side effects.  Maybe the dangers lie somewhere other than the tunnel through which you were looking because of your disciplinary competence and your proclivities. 

One of the elements of opposition from various parties is that this is a technological approach, somehow out of tune with nature: “Technology got us into this mess.  Why are we using technology to get us out of it?”  At the same time, some proponents of exploring solar geoengineering draw analogies with nature as well, “It’s like a volcano,” and so forth. 

Could you perhaps tell us a little bit about how you see evolving attitudes to the relationship between nature and technology and how that informs the way people would interpret a proposed experiment like this? 

At that level of abstraction — “Why are you using technology to correct something that technology made in the first place?” — that is probably not analytically very useful, even though it is rhetorically quite powerful, so we are not going to stop people saying those things. 

When I wrote my book that you kindly mentioned, The Ethics of Invention, I was suggesting that we need to look at what technology is in the first place and sort out that it has a material dimension, that is, we are bending the world to do our will in certain ways, but that it also has a human and social dimension.  The going definition of technology is “a tool that helps you accomplish things that you wish to accomplish,” but that means the human and the social are in there from the start, because it’s not the materiality that wishes to be put to a certain use.  We humans come along and say, “This is how we want to use it,” so purposes and stuff go hand in hand.

I think the abstract statement “Why are you using technology to intervene?” has buried within it this double dimension: Who is making this decision that these are the uses that we want to put the technology to, and are they pure of heart, are they unbiased, where is their frame of reference coming from, and have they thought about us, have they thought about the ways that we live, have they thought about the displacement and the disruptions that often come out of technology?  I fear that, for hundreds of years, ever since the beginning of the Industrial Revolution, the people who are proponents of technology have not always thought long and hard about the people whose lives are going to be affected by the technology. 

We train people in something that we call STEM — science, technology, engineering, and mathematics.  It’s a fact of life that in major universities around the world STEM is taking over from the humanities and the so-called “soft” human sciences; but that means that we are becoming more and more adept at thinking about the material world and less and less adept at thinking about people and about language, religion, and belief systems, all the things that the humanities used to attend to.  So it is not surprising that technologists lose sight of the way people think about their own complex worlds and about the interventions that are made into them.

To cycle back, I think that the people who want to have the discussion about responsible uses of technology have to keep in mind that technology is itself a kind of portmanteau concept.  It has a social dimension built into it, and it has a material, engineering, and scientific dimension built into it; and losing sight of that complexity, I think, can get proponents of technology into a lot of trouble because they just are not thinking in the same frames of reference as the people who are looking at the technology “landing from the sky,” as it were. 

Certainly, in the case of the experiment I cited, one key stated part of the opposition was insufficient consultation, the sense that here is an approach being imposed by one privileged group upon others.  One might also question to what degree those opposing it were also a privileged group of sorts. 

There is a sense in which science — as far as I understand it — has evolved to be a more sort of consensual process precisely to address some of these power dynamics.  You have lots of different people looking into problems from lots of different points of view, and presumably you can come to some kind of consensual position which reflects these different power dynamics.  At the same time, I don’t think people trust that that is the case necessarily. 

I am wondering to what extent — we certainly saw in this case people begin to say, “Oh, we need to consult, we need governance,” and all the rest of it — you are beginning to see groups like this Harvard group fundamentally wire this consultation, this addressing of power dynamics, into an experiment; or to what extent is there still a sense of “we’re just there to get the facts, and then other people can have the discussion about the power dynamics”?

I think that the people who are pushing for solar radiation management or indeed any massive technological solutions to climate change are not naïve people.  They are quite interdisciplinary in their outlook, and they have many ways of talking to other people. 

On the other hand, forgive my saying so, but with engineering, and particularly with physics and the hard sciences, there does come a kind of arrogance.  It is The Two Cultures kind of arrogance going back to C.P. Snow decades ago, that the scientists think that they can do the softer stuff, the humanities and what people need, very well, whereas the humanists cannot do the things that they do, the calculation and the engineering, at all well; and therefore a thoughtful engineer is a better judge of where humanity ought to sit and where humanity ought to go than even the most erudite and esteemed philosopher on the planet. 

That is an occupational hazard.  It is affecting places of higher learning, like my university, where people are voting with their feet. 

But, sitting where I sit, at the nexus of science, technology, and society, I also see a huge number of STEM students who realize at some point that they don’t understand this complex machinery that is society, and that it actually takes incredible immersion and a different kind of thought to get to the bottom of what people feel and think, how they behave and react, and also the institutional interfaces — how should people be involved?

I think it is not the case that engineers and scientists, especially the ones who are proponents of solar geoengineering or any of these other technological interventions, are naïve and that they don’t get that people are out there.  The ones I know personally or have encountered in my professional life are very sophisticated people.  I do think that they don’t have a sufficient understanding of the complexity of the experiential system out of which people come when they are looking at these kinds of interventions.  They don’t understand the role of memory, for instance, and the ways in which past experience affects one’s understanding of the future, or if they do, they tend to turn that into a science. 

This is a little bit of a pet peeve of mine.  When human beings deviate from what the scientists and engineers want to tell them, the reaction within the behavioral sciences has been to say: “Well, there must be something the matter with your brain.  There must be inbuilt biases that make you forget the distant past and only remember recent experience,” or “You exaggerate the things that you yourself have experienced at the expense of the statistical knowledge that you ought to have,” and, “Look, you are more afraid of flying in a plane when planes are x percent safer than riding in your car, and you ride in your car every day.”

There has been this tendency to depreciate the forms of knowledge that people bring to the table, and that is not doing anybody a favor.  That is actually, I think, contrary to the very idea of democracy, that you have to take people and their understanding of politics and power and work with that, not say, “This is an illegitimate understanding” from the start. 

Let’s bring this down to decisions faced by policymakers, a broad idea, but basically people helping to craft policies, rules, governance, regulations, and investments in liberal democracies.  On the one hand you have these scientists, you have the opponents, you have all sorts of dynamics.  How do you begin to go about weighing up all these different positions on something especially as potentially existential as whether we need solar geoengineering to protect us from catastrophic collapse?  How do you go about doing that?  What tools do democracies have?  Do they have the tools to navigate this kind of stuff successfully, assess risks intelligently?  Do they have the ethical frameworks to tackle such huge problems in the Anthropocene or not?  Do we need to build new ones? 

Again, the short answer is that we never have the tools to achieve things on the scale on which they need to be achieved or tackled. 

In the interests of full disclosure, I should say that just this weekend I signed onto an open letter that is calling on the international community to observe a number of bans on technological development.  I thought long and hard.  I don’t easily sign onto things, and I don’t typically sign onto things that are committee-driven; but in this case the article that appeared in WIREs Climate Change and the attached open letter are calling for a strengthening of institutional resources in the international community.

Even if one is skeptical about the capacity, or about whether such a thing as the “international community” even exists (and necessarily at my stage of life I have a lot of skepticism about these things), nevertheless the principle one can agree on is that one nationally based institution, or even two or three leading scientific and technological institutions, is not equipped to think through the ramifications — what it is that people worry about and should worry about — and that the governance structure needs to be developed.  This is a direct answer to your question: Where are the forums where these things are going to be discussed and realized?

I will make one analogy, which is not in the climate domain but in another domain that I work in a lot, and this is biotechnology, genetics, and genomics.  We have developed, as you know, in recent years this tool called Clustered Regularly Interspaced Short Palindromic Repeats (CRISPR), which is about gene editing, and with that we have the potential to alter the way humans reproduce and the characteristics that humans have.  This is not a statement I am making; this is a statement the scientists themselves are making. 

For instance, Jennifer Doudna, who won the Nobel Prize a couple of years ago for her role in the discovery of this CRISPR technology, wrote a book that was in a sense her Nobel book called A Crack in Creation: Gene Editing and the Unthinkable Power to Control Evolution.  That is a very extensive and proud claim by science about creation itself being altered and switched around. 

Solar geoengineering partakes of some of the same kind of flavor.  If you look at what the deliberative bodies are, the forums where these issues are being deliberated, it’s not coming from the bottom up.  So it is violating the basic principle that “We, the people” are determining how we should be governed. 

One can say lots and lots of things against that idea: Who is “the people?”  Is there a global polity?  Who is supposed to come together?  Where is the constitutionalism?  But in my work, what I am trying to do is point out that the fragments are there, that there are areas in which we have at least the rudiments of things — institutions, procedures, and so forth — that we could seek to mobilize; and to some degree that thinking led me to sign onto this document that is calling for in effect a moratorium on certain ways of going forward.  These include things ranging from intellectual property protection for the technologies to questions of deliberation and who should be included in the kinds of debates that we’re having.  

This call comes out of a largely European-based group in the Netherlands, which has been a prominent country in the principles and philosophies of global governance and is itself environmentally challenged: if any rich country is going to face the consequences of sea level rise, it is the Netherlands.  Even the name of the country, the “Nether” lands, that is, low-lying, reflects a nation that has essentially built itself by rescuing itself from the ocean, so it is of course at the forefront of this kind of thinking.  The letter comes largely out of a group from Utrecht, and it is calling for something stronger than a moratorium: it is actually calling for a ban on certain kinds of development until the global governance structures develop.

It’s a problematic position, obviously, because it presupposes the very kind of authority that it is calling into being: Who is going to impose this ban if there isn’t an authority structure there in the first place?  But these are principles.  They lay down certain notions about ought-ness in the world: What should we be thinking about and doing?

I think that is important.  It’s important for people who are observing these developments from the side but are steeped in institutional thinking, legal thinking, and ethical thinking to say that there is a series of normative ideas that should be developing hand in hand, side by side, with the technological developments; and if there is to be a lag, maybe the technology is the thing that should be lagging while the ethical norms are developing. 

One of the things it talks about is “posing an unacceptable risk if ever implemented as part of future climate policy” and so forth.  Is there a countervailing risk on the other side, of shutting off exploration into an avenue that may, according to some, be the only way to avoid certain overshoots with catastrophic results?  I am just wondering how you balance those two risks.

The two risks that you mention — the one of shutting off research that to some degree would ensure that we have the tools in hand if and when we need them, or on the other hand moving ahead too fast and failing to consider all the ramifications in the ways that we have been discussing already — of course those are always risks, but think about what science has done institutionally to guard against that.  Science has said: “There is a thing called ‘pure research,’ and that is just research, and it has no bearing on anything else, and therefore we should be allowed to do that because it is just developing the tools in order to have them ready.”

As we have already discussed, I think that doesn’t make any sense because we are doing that so-called “pure science” purely for the sake of having those tools.  So why not be honest and say, “Okay, we are developing these tools,” but that means we should, hand in hand with that, have the regulatory structures, the policy structures, the accountability mechanisms that we draw on to some extent? 

Here the micro details of scientific practice are not encouraging.  If you look at governance discussions that have happened in my two areas — climate governance and genetics and genomics governance — and you look at who is at the table, you will find a profound imbalance between the caliber of the scientists and engineers who are there and the caliber of all of the rest. 

It isn’t an equal conversation.  It isn’t equal money.  A few years ago, the National Academies of Sciences, Engineering, and Medicine in America said that hundreds of millions of dollars should be set aside to fund this kind of research.  They did not say that even 3 percent of that — I pick 3 percent because this was the amount that James D. Watson set aside for bioethics at the beginning of the Human Genome Project — that even this puny amount of additional money should be set aside for the leading thinkers who are in a position to talk about these issues; that they should be included at the table; that an advisory board must have them; and that there must be time given to explore these questions.

Why not?  If you are going to save humanity, with all due respect, you shouldn’t just go out into the desert and say, “I ate locusts for 40 years, and therefore I am entitled to govern you.”  We have learned, over 2,500 years of thinking about democracy, that there are better and worse ways of engaging with people.

What would a successful outcome of the letter you signed look like to you, let’s say over the next year or so?  What would make you think that it had the impact you were looking for?

The first thing that one might hope for is greater transparency and greater communication between groups.  Whatever else that the Stratospheric Controlled Perturbation Experiment (SCoPEx) project did, it did nobody any favors.  Everybody was left annoyed, embarrassed, and holding the bag in some sense.  So what happens?  Do the scientists try to find a context in which the opposition will be different? 

I am reminded a little bit of what I learned when I went to Ithaca, New York, the place where Cornell University is located.  It’s in the middle of nowhere, it’s a very beautiful location, and it’s home to a major research university.  The local utility had been thinking about converting its coal-fired power plant into a nuclear power plant, and it hadn’t counted on the fact that it was sitting cheek by jowl with one of the most educated and most environmentally aware expert communities in America.  It encountered massive opposition because the ecologists at Cornell and the citizenry in Ithaca mobilized, and so there never was a conversion to a nuclear power plant.

What happens is that these kinds of technologies end up somewhere else, away from the places where people can mobilize, where people have the wherewithal and the resources.  This is something that I worry about a great deal.  If we can’t do it in Sweden — “Sorry, oops, made a mistake, didn’t consult the Sámi” — because, as you intimated earlier, they are privileged: they have had long years of mobilizing, and they know that the Arctic is the coal mine and they themselves the canaries in it, so they are privileged advocates — then do we go to Mexico instead?  Is that really the kind of global world that we want: find a government, any government, that is willing to put you up and conduct the experiment there?  Those are some of the things that I worry about.

Forum shopping is an idea in the law.  If you live in a federal system, you encounter it very early in your legal training, and forum shopping means that people gravitate to the place where they have the easiest time. 

In America, corporations are incorporated in Delaware because that state has made the laxest incorporation laws of any.  You can do that, and it is called a “race to the bottom,” and all of our colleagues in this business understand the race to the bottom extremely well, but do they observe it in their own practices?  This is the sort of discussion that one should be having, and without the arrogance quotient lying where it now lies: a de facto superiority that scientists impose on political thinking, as opposed to thinking about science by people who are living in the world.

I would like to finish on a personal note, if I may.  You have already used the words “I am worried about” a few times, and I noted that in one recent interview you said: “We should always feel worried all the time.  It keeps us on our toes.”  There is a lot of thinking right now about eco-anxiety and the emotions that drive various actions and wield a powerful influence over how people think about climate science and what they need to do.  How do you personally navigate this maze of anxiety and climate grief, and balance that against the need to be useful, to maintain agency, hope, and some kind of impact on where all of this is going?  What do you turn to to help you keep that alive while not blanking out the complexity and difficulty of this problem, and what advice can you give to others struggling with that issue?

Thanks for ending on a personal note, Mark. 

As far as “You should always be worried all the time” is concerned, people like me are used to dealing with people like you, and we were brought up in the sound-bite culture, so we do say things that will carry in that way.  But it is a very real question, because many people are in fact terrified.

The students I teach now are people who are 21, 22 years old; think about what they grew up with.  They grew up in the shadow of 9/11.  They grew up with the War on Terror.  They have lived through a financial crisis.  Of course they were very young, but their families were affected; and then there is the climate threat looming over them, and now the coronavirus.  I find that the young people I teach, the 18-to-22-year-olds, are on average much more anxious than I was at their age.  I didn’t know what was going to happen with my life, but I also had a que sera sera attitude that somehow I would navigate it and things would end up all right, more or less.

I think that young people today are much more worried: Will they get jobs?  Will they be successful at them?  Will their world be overwhelmed by forces completely outside their control?  What about these unwieldy institutions that Greta Thunberg has in her sights?  There is a kind of mobilization of youth that is different from what I have seen in the past, but that is also where the hope is. 

To me personally the hope lies in teaching, which is my profession, and I tend to think of it as being at the center of a kind of fractal formation.  I have seen students to whom I have taught some ideas go out into the world and do all manner of things, and I think that this has to be a coherent enterprise, that this is not one thing.  I don’t have power to change the world, and I am very skeptical of people who apply to my school, the Kennedy School of Government at Harvard, which does public policy, saying, “I have this formula for bettering the world.” 

Be real.  Be modest.  You are only a cog in a very, very gigantic piece of machinery, but some cogs do have teeth that latch onto other things, and you can set a very profound piece of machinery going by training minds and people.  That is where I get my hope, that the human imagination seems to be bottomless, that we find new ideas, we come forward with things. 

It doesn’t always lead to a univocally better world.  We have had the Information Revolution, and I am sure it was not in the minds of all of those revolutionaries that, to go by Oxfam’s figures, ten people in the world would be twice as rich this year as they were last year while 100 million additional people have fallen below the poverty line.  So it is not a good world, but it is a changing world, and I see myself as sitting in one of those centers of change, able to speak and write with reasonable freedom, and with people like you to talk to, who make my life worthwhile.
