Engineering improvements in surgical technologies

[Photo: Dr Pete Culmer]

Dr Pete Culmer is a Senior Translational Research Fellow in the School of Mechanical Engineering, University of Leeds (UK). He has a background in medical engineering, with a PhD and subsequent post-doc work developing technology for rehabilitation assessments and interventions. He was awarded his current position, funded by the Biomedical Health Research Centre (BHRC), in 2010 and works with a growing team of researchers including engineers, surgeons and psychologists, conducting research in Surgical Technologies.

I’m at a large white console that wouldn’t be out of place in a games arcade, staring into a 3D display and carefully manoeuvring two hand-held controllers. Across the room, the other half of the surgical robot looms over the operating table, its arms mirroring my movements. It gives me a helping hand, ironing out the slight shake in my hands and scaling things so the small instruments it holds move more delicately than I could ever manage on my own. I’m trying to tie off a knot, yet despite all this technological help I mess up, miss the loop of thread and instead plunge the needle into the soft mass beneath. Oops…

It sounds like science fiction, but this robot system, the da Vinci, is widely used for minimally invasive surgery in healthcare systems around the world. This one is in the heart of Leeds General Infirmary, where I’m sitting with colleagues who are currently laughing at my lack of surgical prowess. Luckily this is just practice using silicone models rather than people, and I’m an engineer, not a surgeon.

It might seem unusual for an engineer, but this is part of my job in the Surgical Technologies research group at the University of Leeds. The group, led by Anne Neville (Prof. of Engineering, and next up at the da Vinci’s controls) and David Jayne (Prof. of Surgery, watching on amused), focuses on developing new technology to improve modern surgery, with a particular emphasis on laparoscopy: minimally invasive surgery (MIS) on organs such as the bowel within the abdomen. For engineers it’s a challenging and fascinating task, but with systems like the da Vinci already in use, is new technology still necessary and beneficial? Understanding this question takes clinical expertise and experience, which is why our group comprises both surgeons and engineers working closely together. The answer, by the way, is a definite ‘yes’; laparoscopic surgery is sometimes described as being like trying to tie your shoelaces using a pair of long chopsticks, so we need to give surgeons all the help we can to improve this situation.

As a researcher I’m fortunate in having a five-year fellowship position, which has been incredibly valuable in helping me establish a career in academia. It provides me with the opportunities, resources and, crucially, the time to develop my own research. My interests focus on developing ‘smart’ surgical tools that integrate sensors, data analysis and feedback systems to improve the surgeon’s operating experience. But there’s far too much work for one person alone, so a key part of my job involves developing our research group by working with colleagues to obtain funding for new PhD students and post-docs. This gives us more hands on deck, but also a wider set of skills to better tackle the multidisciplinary work, from robotics specialists to trainee surgeons with clinical expertise.

One interesting area we’re looking at is how human tissue can be damaged by surgical tools – and how we can help prevent it. In laparoscopy, organs and tissues are manipulated by grasping them with plier-like tools. However, the tools sit on long levers (the chopsticks) which pass through the abdominal wall, and their mechanisms are affected by friction – factors which make it extremely difficult for the surgeon to ‘feel’ and regulate the forces they apply to the tissue. This can result in tissue damage through excessive force, like getting a bruise but with potentially far more serious consequences for the patient. So we need to understand how the damage is caused: how much force is too much, and how long a ‘grasp’ is too long? Our approach highlights the multidisciplinary nature of this work: using computer-controlled lab equipment we grasp tissue specimens with precisely controlled forces, then relate this to clinical measures of tissue damage through histological analysis – looking at small sections of the tissue under a microscope and assessing how structures and cells have been deformed or destroyed. Using this information we’re working to develop improved tools that minimise tissue damage. The solutions have come from a range of different engineering fields: tribology (new bio-inspired materials with surfaces that reversibly adhere to tissue – think bio-Velcro); mechanics (computational models of how tissues react to forces); and robotics (tools that can actively and automatically regulate the forces they apply to prevent damage).
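As a flavour of the robotics strand, here is a minimal sketch of an actively force-limited grasper. Everything in it – the proportional controller, the force ceiling and the crude force-times-time damage measure – is a hypothetical stand-in for illustration, not the group’s actual design:

```python
# Minimal sketch of an actively force-limited grasper (illustrative only).
# The damage model and all thresholds are hypothetical stand-ins.

DAMAGE_LIMIT = 5.0  # arbitrary units of accumulated tissue load
MAX_FORCE = 2.0     # newtons; hypothetical safe ceiling
KP = 0.5            # proportional gain of the controller

def grasp(target_force, duration, dt=0.01):
    """Drive grasp force toward the surgeon's target, capping force and load."""
    force, load = 0.0, 0.0
    for step in range(int(duration / dt)):
        # Proportional control toward the requested force...
        force += KP * (target_force - force) * dt
        # ...but never past the safe ceiling.
        force = min(force, MAX_FORCE)
        # Accumulate a crude force-times-time load on the tissue.
        load += force * dt
        if load >= DAMAGE_LIMIT:
            return f"released early at t={step * dt:.2f}s to protect tissue"
    return f"grasp completed; accumulated load {load:.2f}"

print(grasp(target_force=3.0, duration=5.0))
```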

[Photo: Pete Culmer]

The other side of my work here involves teaching, something I’ve gradually moved into and really enjoy. We place an emphasis on linking our research with teaching here at Leeds. I think (hope!) this keeps things interesting and relevant for the students; it definitely does for me. I teach a 1st-year computing course, and as the material can be quite abstract it’s important to ground it with real-world examples – from controlling equipment at CERN to autonomously recording high scores on Guitar Hero, both important in their own way! One part I particularly enjoy is running projects for 3rd- and 4th-year students; it gives them a bite-sized taste of research and the opportunity to apply the engineering skills they’ve learnt without the normal constraints of lab classes. This year I ran a project with my colleague Rob Hewson. The idea, hatched over a strong coffee or two, was to investigate how palpation could be applied to laparoscopic surgery – we thought it might be a touch ambitious. Palpation is commonly used by clinicians (e.g. in breast examinations) to detect and assess lumps, which could potentially be cancerous, by feeling the tissue and its mechanical properties (tumours are typically much stiffer than healthy tissue). However, in laparoscopic surgery the surgeon cannot directly touch the tissue, so an alternative approach is needed. The student team surpassed all our expectations, developing a proof-of-concept system that uses a computer model to simulate liver tissue (including a tumour) and then allows you to feel, and virtually palpate, the tissue using a ‘haptic’ interface. They worked hard to achieve a lot in a short space of time, and it was great to see this recognised when they were runners-up in the Global NI student design competition, receiving some attention in the press which they took in their stride! We’ve now submitted the work for publication – certainly a tough act for this year’s students to follow!
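The core idea is simple to sketch: model the tissue as springs whose stiffness varies with position, and feed the reaction force back through the haptic device. Below is a minimal illustration with hypothetical stiffness values; the students’ real system used a proper computational tissue model rather than independent springs:

```python
# Minimal sketch of virtual palpation (illustrative only).
# A 1-D strip of 'liver' is modelled as linear springs, with a stiffer
# region standing in for a tumour.

HEALTHY_STIFFNESS = 1.0  # hypothetical units, N/mm
TUMOUR_STIFFNESS = 5.0   # tumours are typically much stiffer

# Stiffness along a 10-point strip, with the 'tumour' at positions 4-6.
stiffness = [TUMOUR_STIFFNESS if 4 <= i <= 6 else HEALTHY_STIFFNESS
             for i in range(10)]

def palpate(position, indentation=1.0):
    """Force fed back to the haptic device at a given probe position."""
    return stiffness[position] * indentation  # Hooke's law: F = k * x

# Sweeping the probe along the strip reveals the stiff lump.
for pos in range(10):
    print(f"position {pos}: feedback force {palpate(pos):.1f}")
```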

[Image: the simulated liver model]

It’s the end of a long week; over the last few days we’ve run a conference on Oncological Engineering, which has had some fascinating talks, I’ve started teaching our new intake of first-year students, and there’s been lots going on in our research projects. It’s a mix that constantly keeps me on my toes and reflects the challenge of working in modern-day academia, with its often competing demands. I’m not looking for sympathy, though: the work is stimulating and rewarding, and it involves working with a great bunch of people. I wouldn’t have it any other way – I’m already looking forward to next week!

Fractals: How nature just keeps on giving

[Photo: Jovan Nedic]

This week’s guest blogger is Jovan Nedic, a PhD student in the Department of Aeronautics at Imperial College London. His work looks at understanding how fractal geometries can be used to interact with fluids and how they can be implemented in engineering applications.

In this day and age of scientific enlightenment, it is hard to believe that nature still holds something that can help us understand – and, more importantly, exploit – the world we have created for ourselves. But fractals are certainly one of those things.

I always love explaining what fractals are to others, mainly because I get to see the expression on their faces change from a blank, vacant, almost bored look to a sudden realisation that fractals really are everywhere you look and that, conceptually at least, they are relatively simple to understand. The tree is my favourite example, because it’s the easiest. Have a look at Figure 1 and you will see what looks like the various stages of a tree’s growth – or is it? Well, it is, except that in this example all you are really looking at are the different components of the final tree on the right, which I have arranged in a fractal manner.

Figure 1

[Image: stages of a fractal tree construction]

Now look at Figure 2 below, courtesy of Per Ivar Somby. Can you spot the similarities? Put simply, a fractal is a self-repeating pattern: if you kept zooming in, you would always see the same pattern. Mathematically, this means you can go on to infinity; practically, there are obvious limitations.

Figure 2

[Photo: a real tree, by Per Ivar Somby]
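To make the self-similarity concrete, here is a minimal sketch of the construction behind Figure 1 (the branching angle and scaling ratio are arbitrary choices, not anything special): every branch simply spawns two smaller copies of itself.

```python
# Each branch spawns two scaled-down copies of itself, so every level of
# the tree is a smaller repeat of the whole.
import math

def branch(x, y, angle, length, depth, segments):
    """Recursively collect branch segments; each child is a shrunken copy."""
    if depth == 0:
        return
    x2 = x + length * math.cos(angle)
    y2 = y + length * math.sin(angle)
    segments.append(((x, y), (x2, y2)))
    for turn in (+0.4, -0.4):  # radians; arbitrary branching angle
        branch(x2, y2, angle + turn, length * 0.7, depth - 1, segments)

segments = []
branch(0, 0, math.pi / 2, 1.0, depth=6, segments=segments)
print(f"{len(segments)} self-similar segments")  # 2**6 - 1 = 63
```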

The mathematics of self-similar shapes has existed for centuries, but what was rather surprising was the realisation that this is a natural phenomenon, and trees are not the only example. River networks, clouds, coral reefs, leaves, lightning bolts, birds’ wings, broccoli and the cardiovascular system are just a few examples that illustrate the abundance of fractal patterns in nature. So there must be a reason why this is a naturally occurring phenomenon; and, more importantly, could we exploit it in some way?

This is what my colleagues and I in the Turbulence, Mixing and Flow Control Group in the Department of Aeronautics at Imperial College London, under the supervision of Professor Vassilicos, are doing. Our aim is not only to understand how fractals might affect fluids, but also to apply this knowledge to real-life applications.

One of the first, and probably most fundamental, aspects of fractals to understand is that they allow you to make the most of the volume or space you are given. For plants this is vital, because they want to capture as much sunlight as possible so they can grow. For our lungs, it allows us to take in as much oxygen as possible and distribute it to the rest of the body as quickly as possible. Where does this come in handy for engineering applications? In-line mixers are one example that has been looked at; finding the most efficient way of mixing two fluids is vital for chemical combustion.
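You can see this space-maximising property in numbers with the classic Koch snowflake (chosen here just as a worked example, not a mixer geometry): each refinement multiplies the boundary length by 4/3 while the whole shape stays inside the same bounded region, so ever more surface is packed into the same space.

```python
# Koch snowflake: every segment is replaced by four segments, each one
# third as long, so the perimeter grows by 4/3 per iteration while the
# shape never escapes its bounding circle.
perimeter = 3.0  # start from an equilateral triangle with unit sides
for iteration in range(6):
    print(f"iteration {iteration}: perimeter = {perimeter:.2f}")
    perimeter *= 4 / 3
```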

None of this happened by chance, and for the last ten years or so there has been a lot of work on understanding how fractal geometries might be used to control turbulence and flow in general. Some of the results coming from these studies, as well as from ones currently being carried out, show that certain turbulent parameters are not as universal as people once thought. This is not to say that scientists have been wrong in the past; scientific conclusions are based on the data in front of them, and as the data change, so do the conclusions. This is how science evolves.

As our understanding of this new type of flow grows through experimental and numerical work, we can begin to use it to our benefit. One recent study carried out in our group, and one that I have been working on, is on acoustic manipulation, specifically of aircraft wing spoilers. The noise generated by spoilers comes from the unsteady flow they create, which gives off a very low-frequency booming noise that can travel for miles on end; as you can imagine, this is a problem for the aviation industry! There were some signs that by simply adding holes to these plates you would shift the noise to higher frequencies – frequencies we can’t hear. Dogs might, but they won’t be on the plane, so that’s not too much of a problem. What we noticed, however, is that if you used a fractal pattern to create the holes, you got better results and, furthermore, you could control the noise level.

My work has now started to look at using fractal-generated geometries on wings, again following in the footsteps of nature, as birds’ wings have a fractal element to them. This is still a work in progress, but initial signs look promising. These are just a few examples; interest in this field is growing every year and is not confined to the UK, so expect to see many more examples of how fractal geometries are being used to help the world around us.

Geography Is Social

[Photo: Nicola Osborne]

This week’s guest blogger is Nicola Osborne, a Social Media Officer for EDINA, a JISC National Data Centre based at the University of Edinburgh. Nicola began her career investigating “Y2K” for BP but, whilst studying engineering, she became more interested in writing and running review websites. After graduating Nicola joined the Edinburgh University Library then moved on to SUNCAT where she worked with library catalogue data and trialled various social media tools to promote the service. She was appointed to her current innovative role in 2009. Nicola Osborne is a regular speaker on social media, an organiser for the annual Repository Fringe events and regularly blogs on social media events, possibilities and issues for higher and further education.

I have the rather unusual job title of Social Media Officer for EDINA and I’ve been asked to explain what I do, though I should probably start by saying that my work varies hugely from day to day and week to week depending on the projects I’m working with, the events that are coming up, and the new technologies that have appeared lately (right now Google+ and Klout are the hot topics).

The largest part of my role is to work with colleagues across EDINA projects and services to encourage and support the use of social media and related communications technologies. We run services including Digimap, an online mapping and spatial data service built around Ordnance Survey data; JISC MediaHub, a huge resource of images, video and sound; and SUNCAT, the UK Serials Union Catalogue. I help my colleagues think about how we can engage our user communities through social media – whether by including sharing elements or social media-like features in an interface, sharing training materials on YouTube, or providing updates and alerts via blogs and Twitter. I also help to manage the in-house blogs platform, and I authored the EDINA Social Media Guidelines, which have had a fantastic response since we released them under a Creative Commons licence in January 2011. Acting as a social media “evangelist” is not only my passion but an official part of my job description, so a significant part of my role is speaking, writing and running training events where I enthusiastically share new possibilities and best practice for using social media in the education sector.

At the moment I am also part of the team running the JISC GECO (Geo Engagement and Community Outreach) project. We are working with 12 JISC-funded geo projects across the UK to make connections between the use of geo information and “non-traditional” users, which tends to mean those outside geography, geosciences or earth sciences. We use “geo” in a very broad way, to mean anything geographic, geospatial or geo-referenced, or anything where location and/or physical context is important. We are trying to build new connections between those who have expertise, experience and resources to share, those who are interested in using geo, and those who bring new perspectives on geo and on how geo data or tools could be used.

The diversity of projects and disciplines that interact with geo is so broad that GECO is proving to be a constantly challenging and inspiring project. The projects we are working with include several e-learning projects: GeoSciTeach is creating a phone app for teachers leading science fieldwork; JISC G3 are developing approaches to teaching geographic concepts to non-geographers; and ELOGeo are developing an e-learning framework for materials on open data, open source and open standards around geospatial information.

Many of our projects are exploring existing data: the geoCrimeData project is looking at the relationship between location and crime statistics; U.Geo is investigating material held in the UK Data Archive to identify items with location information and potential for use as georeferenced data; PELAGIOS is applying the extremely modern concept of Linked Open Data to expose, share and combine online resources about the very ancient world; IGIBS is working with researchers in the Dyfi Biosphere (a UNESCO-designated area of outstanding diversity of environment, culture, language, etc.) to create a tool that combines research data with authoritative maps and allows that work to become more visible and sharable between researchers; and the Halogen 2 project team are enhancing their existing cross-disciplinary History, Archaeology, Linguistics, Onomastics and Genetics database.

[Image: screenshots from the GECO projects]

We are also working with the NatureLocator project, who have created a curiously addictive phone app that lets you record the location and level of damage to horse chestnut trees, which will help track the spread of the leaf miner moth; xEvents, who are creating a tool to build, share and map academic events; and STEEV, a project that lets you time-travel through historical energy-efficiency data right up to future predictions. And, last but by no means least, the GEMMA project (which has a particularly fetching gerbil logo) is building a series of mapping applications and tools that can be combined and adapted so that any web user can make a map, no matter how much or little that individual knows about mapping.

[Image: the STEEV tool]

Most of these projects are reaching out far beyond traditional academic communities, and all reach beyond traditional geo communities. Communicating with these audiences can be challenging, particularly for those used to working mainly with other specialists in their field. We are helping the projects find each other and related projects both within and outside of academia, and we are helping them to reach out to the broader community around their work. My main responsibility has been to help the project teams communicate what they are doing through their blogs. Some of the project teams include experienced bloggers, but some of our researchers and developers are entirely new to sharing their work in such an informal and public space, and we’ve been delighted to support them in becoming confident and talented bloggers.

In addition to the project blogs, we directly communicate key developments and announcements about these projects through a central JISC GECO blog and on Twitter as @jiscGECO. We try to highlight the connections between all manner of geo ideas, projects, concepts, tools and sites, so we tend to share rather quirky finds in these spaces and always welcome comments, suggestions and guest posts. We will be running a number of events over the coming months, and we will be amplifying these through liveblogging, tweeting and, where possible, videoing key content. We also have a mailing list to encourage broad discussion around geo – we do a lot of matchmaking between different people and communities, and we want to help raise awareness of the ubiquity and importance of geo in everyday work and lives.

Our main focus at the moment is the organisation of an Open Source Geo and Health event (Twitter hashtag #GECOhealth), which we will be running in Edinburgh on Tuesday 9th August. We have worked with the ELOGeo project at Nottingham University, the British Computer Society, Edinburgh Napier University and geo enthusiasts at Edinburgh College of Art to create a fantastic programme exploring the intersections between geo and many aspects of health practice, theory, trends and policy, and we hope this will trigger some really interesting discussions and relationships that will continue long after the project comes to an end.

The multitasking mind

Cross-posted with permission of OUPblog.

[Photo: Dario Salvucci]

This week’s guest blogger is Dario Salvucci, a professor of computer science and psychology at Drexel University, and author with Niels Taatgen of The Multitasking Mind. Dr. Salvucci has written extensively in the areas of cognitive science, human factors, and human-computer interaction, and has received several honors including a National Science Foundation CAREER Award.

If the mind is a society, as philosopher-scientist Marvin Minsky has argued, then multitasking has become its persona non grata.

In polite company, mere mention of “multitasking” can evoke a disparaging frown and a wagging finger. We shouldn’t multitask, they say – our brains can’t handle multiple tasks, and multitasking drains us of cognitive resources and makes us unable to focus on the critical tasks around us. Multitasking makes us, in a word, stupid.

Unfortunately, this view of multitasking is misguided and undermines a deeper understanding of multitasking’s role in our daily lives and the challenges that it presents.

The latest scientific work suggests that our brains are indeed built to efficiently process multiple tasks. According to our own theory of multitasking, called threaded cognition, our brains rapidly interleave small cognitive steps for different tasks – so rapidly (up to 20 times per second) that, for many everyday situations, the resulting task behaviors look simultaneous. (Computers similarly interleave small steps of processing to achieve multitasking between applications, like displaying a new web page while a video plays in the background.) In fact, under certain conditions, people can even exhibit almost perfect time-sharing – doing two tasks concurrently with little to no performance degradation for either task.
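A loose computational analogy – a sketch only, not the actual threaded cognition model, which interleaves cognitive steps through shared mental resources – is a round-robin scheduler that advances two tasks in small alternating steps:

```python
# Round-robin interleaving of two tasks: take one small step from each in
# turn, so at a coarse timescale the tasks appear simultaneous.
from collections import deque

def task(name, steps):
    """A task is just a sequence of small steps."""
    for i in range(steps):
        yield f"{name} step {i + 1}"

queue = deque([task("steer", 3), task("talk", 3)])
while queue:
    thread = queue.popleft()
    try:
        print(next(thread))   # one rapid cognitive step
        queue.append(thread)  # not finished: rejoin the back of the queue
    except StopIteration:
        pass                  # task complete; drop it
```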


The brain’s ability to multitask is readily apparent when watching a short-order cook, a symphony conductor, or a stay-at-home mom in action. But our brains also multitask in much subtler ways: listening to others while forming our own thoughts, walking around town while avoiding obstacles and window-shopping, thinking about the day while washing dishes, singing while showering, and so on.

Multitasking is not only pervasive in our daily activities, it actually enables activities that would otherwise be impossible with a monotasking brain. For example, a driver must steer the vehicle, keep track of nearby vehicles, make decisions about when to turn or change lanes, and plan the best route given current traffic patterns. Driving is only possible because our brains can efficiently interleave these tasks. (Imagine the futility of only being able to steer, or plan a route.)

So how has multitasking earned such a negative reputation? In large part, this reputation stems from unrealistic expectations. The brain’s multitasking abilities – like all our abilities – come with limitations: when performing one task, the addition of another task generally interferes with the first task. For many everyday tasks, the interference is negligible or unimportant: your singing may affect your showering, or thinking about your day may affect your dish-washing, but likely not so much that you notice or care.

Other tasks, though, require every ounce of attention and can push past the limits of our multitasking abilities. In driving, the essential subtasks are demanding enough; additional subtasks – texting, dialing, even talking on a phone – increase these demands, and when controlling a 3000-pound vehicle at 65 miles per hour, even these minimal additional demands may lead to unacceptable risks.

Still other tasks do not have safety implications per se, yet most would consider them important enough that multitasking in those contexts is undesirable. A student in class is already multitasking in listening to the teacher, processing ideas, and taking notes. If this student is checking Facebook at the same time, this extra subtask drains mental effort away from the more critical subtasks and dilutes the learning experience.

The problem with multitasking thus lies not in our brain’s inability to multitask efficiently, but in our own priorities and decision-making. When we choose to multitask, we are deciding – consciously or not – to accept degraded performance on one or more of the tasks involved. And when we still choose to multitask when it is undesirable (as in the classroom) or unacceptable (as in driving), we should hold ourselves accountable for these decisions. So if you walk into a pole or wreck your car while texting, don’t blame your brain; blame yourself.

How does learning to read affect our brains?

[Photo: Sophie Scott]

Sophie Scott is the group leader of the Speech Communication Group at the Institute of Cognitive Neuroscience at University College London (UCL) (UK). She was awarded a Ph.D. at UCL on the acoustic basis of rhythm in speech and then spent several years as a postdoctoral researcher at the Medical Research Council Cognition and Brain Sciences Unit in Cambridge (UK). She currently holds a Wellcome Trust Senior Fellowship and has been funded by the Wellcome Trust since 2001.

Her research uses functional imaging to investigate the cortical basis of human speech perception and production, applying models from primate auditory processing to the neural basis of human perception. She is particularly interested in the different kinds of information conveyed when we speak and how the acoustic information in our voices can be processed in different ways in the brain.

We learn to read in a very different way from learning to speak. Speech is embedded in our social interactions from the minute we are born, and even before birth we can hear our mother’s voice in utero. These prelingual twins (www.youtube.com/watch?v=JmA2ClUvUY) show how you can understand verbal interactions before you even have words at your disposal.

Learning to read, in contrast, is something we largely do at school, where we are specifically instructed in how to do it. There are different writing systems:

• Logographic (e.g. Chinese), where a written word or a meaningful part of a word is represented by a single written element (though that symbol may contain phonetic and semantic information).

• Syllabic (e.g. Cherokee), where a written element conveys a whole syllable.

• Alphabetic (e.g. English), where a single written element roughly represents a single speech sound.

Each of these systems has its own advantages and disadvantages. Notably, alphabetic writing systems can differ widely in how easy they are to acquire. Children learning to read English, which is highly irregular in both spelling and pronunciation, do less well at reading non-words after a year of reading than children learning to read Spanish or Finnish (Aro and Wimmer, 2003); English-reading children only catch up with their Finnish peers in grade 4.

In addition to the many undoubted benefits of literacy, we can see the impact of learning to read in a variety of ways. For example, it is harder to name the colour of the ink in the word green than in the word grown. This Stroop effect is commonly used to demonstrate how meaning can interfere with cognitive processes – if you are naming ink colours as fast as possible, competing colour names will slow you down. Importantly, this can only occur because, once you are a skilled reader, you can’t ‘switch off’ your reading when trying to name the ink colours, which is how the competing semantic information gets into the system. As skilled readers, it is nearly impossible for us not to read words.
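You can get a feel for the effect with a rough console sketch of a Stroop-style task (illustrative only: a real experiment would colour the text itself rather than labelling the ink in brackets, and would randomise and analyse properly):

```python
# Crude Stroop demo: name the ink colour, not the word, and time the response.
import random
import time

COLOURS = ["red", "green", "blue"]
trials = [(word, random.choice(COLOURS)) for word in COLOURS * 2]
random.shuffle(trials)

for word, ink in trials:
    start = time.time()
    answer = input(f"Word: {word.upper()} (ink: {ink}). Type the ink colour: ")
    rt = time.time() - start
    kind = "incongruent" if word != ink else "congruent"
    verdict = "ok" if answer.strip().lower() == ink else "wrong"
    print(f"  {kind}: {rt:.2f}s ({verdict})")
# Incongruent trials (word and ink differ) typically take longer.
```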

The skill of learning to read also forces us to engage with sounds in ways that differ from what we have to do to understand spoken language. Some abilities in the manipulation of speech sounds are present before we learn to read (e.g. being able to tell that two words rhyme), while others emerge as a consequence of our learning to read. Thus, segmental skills – being able to break a word down into separate chunks corresponding to individual speech sounds – are something that we acquire when we learn to read. People who have never read find it hard to split ‘cat’ into ‘c’, ‘a’ and ‘t’ (though not completely impossible).

[Image: the word ‘cat’ broken into its component sounds]

Maybe because of the skills we acquire when we learn to read, psychologists and cognitive neuroscientists often use these segmental skills as an index of speech perception ability, despite the fact that people who haven’t learnt to read, and who therefore find such tasks hard, can understand speech perfectly well. Because we can break spoken words down into smaller chunks, it is often assumed that this must be a central aspect of speech perception.

This bias towards segments in speech may have had other, more central effects on how we construe the problems of speech perception. It has been suggested, for example, that reading with an alphabetic system has biased us into the belief that the smallest units of speech, phonemes, are perceptual realities in terms of how we process speech from sound to meaning (Boucher, V.J. (1994) ‘Alphabet-related biases in psycholinguistic inquiries: considerations for direct theories of speech production and perception’, Journal of Phonetics, 22(1): 1-18). The argument goes that because we can segment speech into phonetic elements (a skill we acquire when we learn to read), and because we are immersed in a reading system which represents spoken words as sequences of alphabetic symbols, we implicitly assume speech to have these characteristics.

This assumption has had scientific consequences. For a long time, theories and models of spoken word comprehension incorporated a phonetic level of representation (e.g. the TRACE model). The problem with phonemes is that any one speech sound will be greatly altered by where it is in a word and by the sounds around it – in English the sound ‘l’ is very different at the start of the word ‘leaf’ than at the end of the word ‘bell’. There are also co-articulation effects, which refer to the ways that the same speech sound is affected by its neighbours: the ‘l’ at the start of ‘let’ differs acoustically from the ‘l’ at the start of ‘led’ because of differences between the final ‘t’ and ‘d’ phonemes. This covariance is highly useful to the listener, and it makes sense that the perceptual system would preserve this detail. If you are building a computer system to understand speech, for example, you don’t build one to identify particular phonemes; you build it to look across sequences of sound, either groups of phonemes or whole words. Indeed, more recent psychological models of human speech perception explicitly do not assume that phonemes need to be identified prior to comprehension (e.g. Shortlist B; Norris and McQueen, 2008).

At a brain level, can we see any sensitivity in speech perception areas to phonemes, as opposed to sequences of speech sounds? We recently investigated the neural activity seen when people silently rehearse pseudo-words. We varied how long the pseudo-words were in syllables (e.g. sapeth vs sapethetis) and how phonetically complex they were (e.g. sapeth vs stapreth). This enabled us to separately identify brain areas that are more activated when people try to maintain longer pseudo-words from those that are more activated when there are phonetically complex sequences in the material being rehearsed. Silent rehearsal recruits both auditory and motor brain systems, and both of these systems were sensitive to the length of the pseudo-words. In contrast, only the motor output systems were sensitive to the phonetic complexity of the pseudo-words, being more active when phonetically more complex sequences were rehearsed. This finding suggests that auditory areas are less sensitive to specific phonetic details than motor systems are. In turn, this may mean that if phonemes are ‘real’ phenomena in the language system, they are implemented in the motor systems, not in the perception systems. In other words, we may not need to extract phonemes to understand speech, but they may be important elements in speech production.

My son is currently learning to read and write, and watching his delight at solving the ‘problem’ of the sounds in words and what rhymes with what is a joy and a privilege to see. Overhearing his dad explaining why ‘bird’ contains an ‘r’ (short answer: he had a heroic go at pronouncing it ‘biRRRd’, as if he were from the Scottish Highlands) showed me both the problems that written English presents to someone learning it, and the dominance reading can cast over what sounds we think there are in words.

Why We Need A New Economics

[Photo: David Orrell]

This week’s guest blogger is David Orrell, an author, founder of Systems Forecasting, and an Honorary Visiting Research Scholar at the Smith School of Enterprise and the Environment in Oxford. The UK Kindle edition of Economyths is available for a limited time at the highly economical price of 99p.

In January 2009, in the immediate aftermath of the credit crunch, the physicist and hedge fund manager J.P. Bouchaud wrote in the pages of Nature that ‘economics needs a scientific revolution’.

Economyths is an attempt to spell out what such a revolution might look like, and document the exciting developments taking place in economics.

It too is written from an outsider’s perspective – that of an applied mathematician working mostly in the area of computational biology. Many of the techniques used in that field, such as network theory and agent-based modelling, are beginning to find widespread applications in economics. But the assumptions they are based on are completely different from those of mainstream economics.

Consider for example the idea that the “invisible hand” of the marketplace drives prices to an optimal equilibrium. This idea is usually attributed to Adam Smith, though as the Czech economist Tomas Sedlacek argues it actually goes back much further.

In the 19th century, neoclassical economists such as William Stanley Jevons and Léon Walras attempted to demonstrate this principle mathematically, based on the idea of Homo economicus, or rational economic man. In the 1950s, economists finally managed to prove that markets would indeed reach a Pareto-optimal equilibrium. But to do so, they had to make numerous assumptions – including rational utility-maximising behaviour, coupled with perfect information, and infinite computational capacity.

[Image: Economyths front cover]

In the 1960s, efficient market theory was proposed as an explanation for why the economy was impossible to predict. Again, it assumed that market participants were rational and acted independently of one another to optimise their own utility. In the 1970s, rational expectations theory was all the rage. Tools in use today – such as the risk models relied on by banks, or the General Equilibrium Models called on by policy makers – continue to make these assumptions, with at best small modifications.

Of course, no one thinks that people are perfectly rational or independent, or that the economy reaches a perfect equilibrium – but it has been generally believed that these assumptions were good enough to capture the overall behaviour. They could be viewed as representing an ideal economy, to which the actual economy can at least aspire. And they provided policy makers with an excuse for dangerous deregulation of the financial sector – what Adair Turner has called “regulatory capture through the intellectual zeitgeist.”

Unfortunately, as illustrated most recently by the credit crunch, this picture of the economy is highly unrealistic. The behaviour of home owners during the US credit crunch – or for that matter large firms like Lehman Brothers – hardly conforms to the model of rational economic man. And if stock markets are really governed by the invisible hand, then it has a bad case of the shakes.

So-called heterodox economists have long questioned the assumptions behind mainstream economics. But following the credit crunch, there has been an even more concerted effort to develop alternative models which can address issues such as economic inequality, environmental sustainability, human wellbeing, and financial instability. Many of the new ideas are coming from areas of applied mathematics such as nonlinear dynamics, complexity, and network theory.

An example is the agent-based models used by complexity researchers such as Doyne Farmer to simulate the economy. Models have been developed of artificial stock markets in which hundreds of simulated traders buy and sell stocks. Each of the trader “agents” has its own strategy, which adapts in response to both market conditions and the influence of other agents. Instead of settling on a stable equilibrium, prices are found to experience periodic booms and busts as investors flock in and out of the market. Agent-based models are also used to simulate the highly skewed distribution of wealth in many economies, in which a small percentage of the population sequesters most of the wealth.
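To give a flavour of how such models behave – this is a minimal sketch with made-up parameters, not a reconstruction of any published model – here traders switch between a value-based and a trend-following strategy, occasionally imitating one another, and the price moves with net demand:

```python
# Minimal agent-based market: herding between 'fundamentalist' and
# 'trend-follower' strategies produces booms and busts (illustrative only).
import random

random.seed(1)
FUNDAMENTAL = 100.0          # the asset's notional true value
price, prices = 100.0, []
agents = [random.choice(["fundamentalist", "trend"]) for _ in range(200)]

for t in range(300):
    prev = prices[-1] if prices else price
    demand = 0
    for i, kind in enumerate(agents):
        if kind == "fundamentalist":      # buy below value, sell above
            demand += 1 if price < FUNDAMENTAL else -1
        else:                             # trend-follower: chase momentum
            demand += 1 if price >= prev else -1
        if random.random() < 0.02:        # occasionally imitate the crowd
            agents[i] = random.choice(agents)
    prices.append(price)
    price *= 1 + 0.0005 * demand          # net demand moves the price

print(f"min {min(prices):.1f}, max {max(prices):.1f}")  # booms and busts
```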

Another rich source of new ideas is those other life sciences, biology and ecology. The ecologist Robert May recently joined forces with the Bank of England’s Andrew Haldane to analyse the financial network from a systems perspective. They found that risk metrics used for individual institutions such as banks fail to account for systemic risk.

The financial system has become increasingly interconnected in recent decades. This is good for short-term efficiency, but also means there is an increased risk of contagion from one area to another, which does not register with conventional risk models. As ecologists know, robust ecosystems tend to be built up of smaller, weakly connected sub-networks. Maybe financial regulators can learn a trick from nature, by introducing a degree of modularity and redundancy. An even more urgent issue, of course, is how to make the human economy fit in with the global ecosystem which contains it.

One of the lessons of complex systems research is that it requires collaborations between people from a broad mix of backgrounds. Another is that models are only imperfect approximations of a system. Accurate prediction will always remain elusive. However, we can at least base our models on realistic assumptions. And even if we cannot predict the exact timing of the next financial crisis any better than we could the last one, we can at least learn how to make the system more robust in the first place.

Science in the Arab world

[Photo: Dr Rana Dajani]

Dr Rana Dajani teaches molecular biology and is the Director of the Center for Studies at the Hashemite University of Jordan. She is also the founder of the initiative We Love Reading, which aims to encourage children in the Arab world to read for pleasure. Dr Dajani, who took part in the Belief in Dialogue conference on 21-23 June, blogs about what’s needed for science to flourish.

The conference was organised by the British Council in partnership with the American University of Sharjah and in association with the International Society of Science and Religion.

As a scientist in the Arab world, I practise science and research every day. The challenges are multiple and in many cases not so obvious to those in the West, who can afford to take these things for granted. The most important element for fostering research is creating an environment to encourage, support and sustain it.

Firstly, such an environment can only be created if you put in the work and deal with the problems as they arise. It’s not something that you can just dream up while sitting at your desk. Secondly, to make it sustainable, management needs to be accountable for its actions. Unfortunately, this is not always the case here. Without these two elements, no money in the world will allow science to progress and develop.

There is an abundance of minds and creativity in the Arab world. However, many of those minds drain away to the West, where there is a well-established support system for research.

So, what is the solution? The solution is freedom: freedom of opinion, being able to come to a decision through questioning, unhindered contemplation, institutional accountability, democracy and human rights.

Freedom will ultimately lead to progress and development not only in science but in all aspects of life in the Arab world. Freedom of opinion starts at home, with children given the opportunity and encouragement to question, challenge and form their own opinions.

This should further be fostered in schools, where teachers encourage students to ask questions. If teachers don’t have the answers, they should say so honestly and without covering up gaps in their knowledge by stifling the student. Children can learn to form their own opinions if they are taught reasoning and deduction and are granted the space to practise those skills. That is what our children need and that is what is missing in the Arab world.

University students have not been able to form independent opinions reflecting their original thinking. The day my students wrote essays expressing themselves was the day they felt human. One student told me that he was finally Someone – with a capital S.

The day I listened to a student explain her opinion was the day she could give me a big smile and tell me it was the first time she felt respected. It is such individuals who build our communities and nations, who will make a difference, who will take us into the twenty-first century with confidence.

How do we achieve this goal?

I believe the only effective way is to instil a love of reading in our young ones, so that they can learn from other people’s experiences across time and space and see and respect other ways, other narratives, that are equally justified. I have developed a programme called We Love Reading to do that throughout the Arab world by training women to read aloud to children in their neighbourhoods.

Follow Belief in Dialogue on Twitter

Facts and figures – treat with caution

[Photo: Dr David Barlow]

Dr David Barlow is Consultant in Genitourinary Medicine at St Thomas’ and Guy’s Hospitals, London. He has been the lead author for the chapter on gonorrhoea in the last three editions of the Oxford Textbook of Medicine. Between 1986 and 1993, at St Thomas’, he ran the largest linked HIV sero-survey in the United Kingdom. The third edition of his book Sexually Transmitted Infections: The Facts, Oxford University Press, with original cartoons by the late Geoffrey Dickinson, was published in March 2011.

There is something slightly uncomfortable about authoring a book whose cover proclaims: “XXX – The Facts”, with a sub-heading “All the information you need, straight from the experts”. Such is the house style of the OUP for its medical ‘Facts’ series, currently some 35 strong, but going forth and multiplying as you read.

Anyway, it got me thinking about how, in my specialty, caution is called for when ‘facts’ become ‘figures’. I had an interest in heterosexual transmission of HIV in the 1980s and 1990s, which put me in conflict with the official number-crunchers, and I’m afraid I’m still suspicious when presented with totals. At the final proof stage of my ‘Facts’ book, I checked the Health Protection Agency’s website for the numbers of UK STIs reported for 2008. Unmentionable diseases including syphilis, gonorrhoea, warts and herpes remained as I had written. Total chlamydial infections, however, had changed from 126,882 (accessed July 2010) to 217,570 (accessed January 2011). A small adjustment might be reasonable. But 70%? This was a DB Type 4 numerical error.

DB’s numerical errors: Types 1-5

Type 1 Somebody has a vested interest: “If we tell these clap-doctors that laboratory culture of the gonococcus is only 70% sensitive, they’ll shut their lab’ and buy our ‘totally sensitive’ NAAT.”

Type 2 The totals may be correct but are misleading (1): “It is Government/Department (of Health) policy to pretend that there is a rapidly increasing HIV epidemic in heterosexuals who are transmitting within the UK.”

Type 3 The totals may be correct but are misleading (2): There is a genuine, probably innocent, misinterpretation of the figures (see horseradish sauce, below)

Type 4 The totals may be correct but are misleading (3): The explanation is perfectly reasonable and logical, but the calculation is opaque/we are keeping it to ourselves/forgot to tell you/have you read the small print?

Type 5 The totals are incorrect: Woops!

At the beginning of June, I awoke to BBC headlines about a doubling of UK-acquired HIV between 2001 and 2010. This drew me to the HPA’s website, where I found a press release (June 6): ‘Last year there were 3,800 people diagnosed with HIV who acquired the infection in the UK, not aboard [sic], and this number has doubled over the past decade.’ From the same site: ‘… HIV diagnoses among heterosexuals who most likely acquired in the UK have risen in recent years from 210 in 1999 to 1,150 in 2010’. I shall return to these later, but if you really have nothing better to do, why not see whether you can confirm the figures quoted above by accessing the HPA’s ‘New HIV Diagnosis’ Table 5 here. And your next task (5 marks) is to re-word the press release…

Exactly thirty years ago, on 5th June 1981, the sleuths at the Centers for Disease Control published their crafty bit of epidemiology entitled ‘Pneumocystis pneumonia – Los Angeles’. The CDC had picked up an increase, from the West Coast, in requests for pentamidine. This was the drug used to treat PCP, a rare lung infection found in renal transplant patients whose immunity had been weakened (deliberately) to reduce rejection.

These new cases were different. The men were immuno-compromised but none were undergoing transplantation and all were gay. Thus were HIV and AIDS (although not so named for a year or two) introduced to an awe-struck, and soon fear-stricken, public.

Britain had its first AIDS case in 1981 and in August 1982 the Communicable Disease Surveillance Centre (the UK’s CDC) published the first of their monthly updates in the Communicable Disease Report, the CDR. The risk categories were divided into homosexual, haemophiliac, blood transfusion, intravenous drug users and heterosexuals [without other risk]. It was with these heterosexual cases that the distinction between ‘the truth’ and ‘the whole truth’ became lost during late 1986.

The May 1986 CDR tables broke down the heterosexual AIDS cases into: 3 with a USA/Caribbean connection, 3 simply ‘heterosexual contact’ (of whom two “…had recently returned from Uganda and Mozambique”), and 12 associated with sub-Saharan Africa. In October this connection became a footnote: “associated with sub-Saharan Africa”, and by November the categories had become ‘contact UK’ and ‘contact abroad’. In December, separate HIV figures were reported, without footnote, simply as ‘heterosexuals’ (Type 2 numerical error). Africa had disappeared from the tables.

By one of those coincidences loved by cynics and conspiracy theorists, the UK-wide leaflet drop about AIDS occurred in January 1987, the very next month, followed in February by the ‘Don’t die of ignorance’ campaign. The national press then published increasingly doom-laden descriptions, largely unchallenged, of the burgeoning UK AIDS epidemic in heterosexuals.

What actually mattered was the number of cases being transmitted in Great Britain. Was the disease spreading? What was the risk from a bonk?

The change in wording of the heterosexual categories in the late 1980s allowed speculation that the ‘infected abroad’ category was largely made up of British nationals who had gone overseas and returned with HIV/AIDS. This was the CDR’s interpretation when they gave advice to travellers in 1991 (Type 3 numerical error).

We published an alternative view in the Lancet (CDR did not print correspondence, commentary or criticism) and the CDSC, unusually given the chance to reply in the same edition, graciously and politely acknowledged our figures from St Thomas’ but said that they were not representative. Neither my first nor last experience as an outlier.

Have you ever made horseradish sauce? Epidemiologists and cookery-writers run similar risks. Counting and cooking need to be in their respective repertoires but, for both, the craft improves with hands-on experience: contact with patients, or trying out the recipe. If your cookbook doesn’t mention wearing goggles with the wind behind you while you grate this vicious root (and most don’t), the author has never made the sauce. Epidemiologists don’t need the formula for horseradish peroxidase either, but they may miss an open goal if they don’t see patients.

Four other hospitals in or near London (I confess to prompting) reported that most of their (no other risk) HIV-positive heterosexuals were, like ours, from Africa (Outliers 5, Regression Lines 0). It was not until later in the 1990s that the CDSC accepted the UK heterosexual HIV/AIDS epidemic to be largely imported, with little evidence of significant transmission between heterosexuals from, or in, this country.

So, how did you get on with the HPA’s Table 5? You found the 210 for 1999 easily enough, I’m sure. But the 1,150 (and 3,800) for 2010? Well, a helpful person in the HPA’s epidemiology section told me they reached these figures by allocating the as-yet uncategorised (‘not reported’ – penultimate row of Table 5) cases in the same proportions as the categories where the region of infection was actually known (Type 4).

“But you didn’t apply that correction to the 210 in 1999”.

“Ah, no. We didn’t!” (Type 5).
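The adjustment itself is easy to reproduce with made-up numbers (these are not the real Table 5 figures): cases whose region of infection is unknown are shared out in the same proportions as the cases whose region is known.

```python
# Proportional reallocation of 'not reported' cases (hypothetical numbers).
known = {"UK": 600, "abroad": 1400}  # cases with region of infection known
not_reported = 1000                  # cases still uncategorised

total_known = sum(known.values())
adjusted = {region: count + not_reported * count / total_known
            for region, count in known.items()}
print(adjusted)  # {'UK': 900.0, 'abroad': 2100.0}
# The catch: comparing an adjusted 2010 figure with an unadjusted 1999
# figure overstates the rise (the Type 5 error above).
```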

And, finally, the Type 1 numerical error? Specificity is also important in diagnostic tests (the 55-year-old granny who went to her GP for a smear test, was screened for chlamydia, and came out with gonorrhoea. Yes, truly!). The Nucleic Acid Amplification Tests for gonorrhoea may give you false positives.

Why didn’t you tell me about this before, Mother?


So, am I advising less sex?

What, and put myself out of a job? Give over!

References

Barlow, D. (2004) ‘HIV/AIDS in ethnic minorities in the United Kingdom’, in Erwin, Smith and Peters (eds), Ethnicity and HIV: prevention and care in Europe and the USA, 21-46.

Barlow, D., Daker-White, G. and Band, B. (1997) ‘Assortative mixing in a heterosexual clinic population – a limiting factor in HIV spread?’, AIDS, 11: 1039-44.

Belief in Dialogue: Science, Culture and Modernity

[Photo: Dr Fern Elsdon-Baker]

Dr Fern Elsdon-Baker is Director of the British Council’s Belief in Dialogue Programme. Belief in Dialogue is a new intercultural programme which explores how people in the UK and internationally can live peacefully with diversity and difference in an increasingly pluralistic world. Fern currently serves on the UK Arts and Humanities Research Council’s Science in Culture Advisory Group. A passionate believer in the interactive communication of science, history and philosophy, in her spare time she is the recorder for the History of Science section of the British Science Association. She also serves on the programme committee of the British Society for the History of Science.

This blog post is coming to you from the United Arab Emirates. I am at a British Council conference organised with the American University of Sharjah, in association with the International Society for Science and Religion. The title of the conference, Belief in Dialogue: Science, Culture and Modernity, may at first seem a little challenging to some regular readers of Nature. How can there possibly be a dialogue of this kind?

One of the key questions we will be asking at the conference is what factors need to be in place in any society or culture for scientific endeavour or inquiry to flourish. At first this might seem a simple question – surely it’s just about good science education and funding for scientific research institutions? However, I would argue that it takes much more than that to build a thriving scientific economy. Certain building blocks are needed in areas of society that we might not readily recognise.

The role technological and medical advances play in our daily lives is clear. We are all aware of where the ethical boundaries may lie, across a range of questions from stem cell research and reproductive technologies through to climate change and water security.

However, to get to the root of what makes science flourish we need to make one fundamental observation: what we mean when we use the terms ‘science’, ‘technology’ and ‘medicine’ are all different things. Though intrinsically intertwined, with shared – yet in places divergent – historical contexts, they differ in their methodologies and philosophical underpinnings.

For technology to flourish you do not necessarily need a flourishing ‘scientific’ culture – significant societal drivers such as industry and entrepreneurship play perhaps a bigger role than a purely ‘scientific’ approach. Scientific inquiry is as much a way of thinking, seeing and asking questions about the world around us as it is a consensus on an agreed methodological approach.

‘Science’ in this way, whether we recognise it or not, is an integral part of our daily lives. It is the very fabric of our cultural context, but in different ways. I am not arguing that there is no hope of an ‘objective science’ – I am certainly not suggesting that the very stuff of science is culturally relative. But the cradle of all scientific inquiry is the broader societal and cultural context in which it sits: not just the cultural perspective of the individual or team of researchers, but the political system which supports or suppresses, the funding streams that can inadvertently create fashions and trends, and those of us in wider society who are ultimately the end users of any research and in turn fuel both political and funding priorities. This rich tapestry of influences ultimately shapes the scientific discourse of the day.

The answer to my question, then, lies outside the science faculty or classroom. It is increasingly recognised in developing scientific economies that the humanities play a key part in helping to frame the systems of thinking needed to engage both critically and analytically with the world around us. In the UK, we have long recognised the value of strong multidisciplinary discourse, and it is to our credit that our research funding councils see the critical value in this interplay between sciences and humanities – even in these difficult economic times.

Another factor that we are growing to value more and more is open engagement with wider society and cultures in science communication. Gone are the days when we would expect to disseminate ‘knowledge’ to an uninformed and apparently wilfully ignorant public. We are all members of that amorphous mass we like to call ‘the public’, and we cannot assume that we are all uninformed, uninterested or without valid questions about the role of science in society today, or about how it relates to our own individual cultural perspectives.

Freedom of thought and expression play a key role here too. Too often fundamentalists at the extremes of the spectrum close down others’ perspectives, not because of any epistemological impasse, but merely through an unwillingness even to engage with another’s cultural perspective. Too often when we communicate science we cleave to polarising narratives that create an ‘us’ and ‘them’ approach to science communication – which can exclude a large proportion of the world’s population. There is no ‘them’; there is only an ‘us’.

In an increasingly globalised world where we all have multiple identities, it is not possible to delineate between communities or cultures in the simplistic ways of the past. We cannot, therefore, assume, as has been done in previous years, that it is possible to draw clean divisions between cultures – be that a disciplinary divide between science and the humanities or a divide between world views.

In my work I have had the opportunity to meet a number of people from many different cultures, communities and faiths. Sometimes my perspective on how we view the world differs from those I meet, but I have not yet had the misfortune to meet someone so set in their own world view that we cannot openly engage in a discussion about those differences. In some surprising and heart-warming circumstances I have found considerable common ground with those who initially felt they were in opposition to my work communicating evolutionary science but have since become firm supporters. At other times I have come away with my own prejudices and misconceptions challenged, and found a new respect for, or understanding of, another’s world view, even if it is one I do not wholly share.

What I hope we will see at the conference at the American University of Sharjah is an opportunity to openly share different perspectives on the issues and challenges at the core of scientific discourse that are fundamental to all societies’ growth. But more importantly, I hope that by bringing people together from different countries with different beliefs and world views, we will each take our part of the jigsaw and piece it together – so that in the future we can build a clearer global picture of how to communicate science more effectively as we face the many challenges ahead of us all in the 21st century.

To join in the discussion on Twitter, the conference hashtag is #BIDSCM and you can find the official Belief in Dialogue Twitter account here.