
The Powers of Proton Therapy

Researchers from the Perelman School of Medicine at the University of Pennsylvania have completed an analysis demonstrating the powers of proton therapy. In the study, cancer patients undergoing proton therapy had a significantly lower risk of side effects than those receiving traditional radiation. While overall cure rates remained identical between the two groups, those undergoing proton therapy had, on average, fewer unplanned hospitalizations. Overall, the researchers found this new wave of therapy reduced the risk of those side effects by around two thirds.


There are a few key differences between proton therapy and traditional radiation. The main one is that traditional photon radiation uses several x-ray beams to blast the tumor with radiation. The problem with this method is that radiation is unavoidably deposited in the healthy surrounding tissue, increasing the risk of further damage. Proton therapy works slightly differently: its treatment involves directing a beam of protons at the tumor, which deposit most of their energy at the target with hardly any delivered beyond it.

As part of the study, researchers tracked various side effects including pain, nausea, diarrhea, and difficulties in breathing or swallowing. However, they only focused on side effects classed as grade three or higher, meaning those severe enough to require hospitalization. Of the 1,483 cancer patients receiving both radiation and chemotherapy, 391 were given proton therapy while the remainder received photon radiation.


The main point behind the study was to determine whether or not patients experienced any adverse side effects of grade three or higher within 90 days of receiving treatment. Results showed that 27.6 percent of patients receiving the traditional photon treatment did, compared to just 11.5 percent of those receiving the new proton therapy.
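The gap between those two rates can be checked in a couple of lines. Note that the raw rates give a reduction of roughly 58 percent; the "around two thirds" figure elsewhere in reports on this study presumably comes from the adjusted analysis, which this article doesn't detail.

```python
# Relative reduction in grade-3+ side effects, using the raw rates
# quoted in the article: 27.6% (photon) vs 11.5% (proton).
photon_rate = 0.276
proton_rate = 0.115

relative_reduction = (photon_rate - proton_rate) / photon_rate
print(f"Relative risk reduction: {relative_reduction:.0%}")  # → 58%
```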

“We know from our clinical experience that proton therapy can have this benefit, but even we did not expect the effect to be this sizeable,” commented James Metz, MD, leader of the Roberts Proton Therapy Center at Penn, chair of Radiation Oncology, and senior author of the study. And the good thing about proton therapy is that the reduction in side effects doesn’t come at the cost of reduced effectiveness, so it could potentially improve survival outcomes.

Proton therapy is FDA-approved and showing great promise to become a highly effective and widely used cancer treatment.  

Zn Batteries Get a Boost in Rechargeability

When it comes to batteries, efficiency is key. And with a big focus on staying green, experts have long been on the lookout for a battery that is rechargeable, safe, and green, and can take the place of the volatile, not-so-efficient lithium-ion battery. And with the chemistry of the solid-electrolyte interphase (SEI) being an essential governing factor in the cycling life of rechargeable batteries, SEIs have become one of the key focus points for research.


Zn batteries, or ZBs for short, may be just that. Not only are they low cost, but they also deliver some serious volumetric energy, which makes them a prime candidate to be the next big battery. The only problem is that some characteristics of the Zn-electrolyte interface restrict rechargeable ZBs' development and use. To try and solve this issue, Prof. CUI Guanglei and colleagues from the Qingdao Institute of Bioenergy and Bioprocess Technology of the Chinese Academy of Sciences set about creating new concepts involving existing SEIs as a way of modulating the electrochemical properties of Zn.

In situ formed and artificial protective interphases to tame Zn electrochemistry. CREDIT ZHAO Jingwen, ZHAO Zhiming and QIU Huayu

During their research, the group observed (for the first time ever) a fluoride-rich SEI located on a Zn anode. “The protective interphase enables reversible and dendrite-free Zn plating/stripping even at high area capacities,” said Prof. CUI. “This is due to the fast ion migration coupled with high mechanical strength.” Designed in this way, the Zn batteries demonstrated great stability and excellent efficiency at both high and low rates.

The researchers also found that coating the Zn surface with a protective polyamide layer further improved its performance, even at a high depth of discharge.

Protein Production Supercharged

Proteins are notoriously difficult to synthesize in the lab. But they are essential for developing drugs such as insulin or clotting factors. Researchers at Washington University School of Medicine in St. Louis have found a way to change the sequence of the amino acids that make up a protein. And, as a result, they have sped up the laborious and costly production process used today tenfold.


“The process of producing proteins for medical or commercial applications can be complex, expensive and time-consuming,” commented senior author of the study Sergei Djuranovic, Ph.D., an associate professor of cell biology and physiology. “If you can make each bacterium produce 10 times as much protein, you only need one-tenth the volume of bacteria to get the job done, which would cut costs tremendously.” Another good thing about the process is that it will work with any kind of protein. 

Tubes of green fluorescent protein glow more brightly when they contain more of the protein. Researchers at Washington University School of Medicine have found a way to increase protein production up to a thousandfold, a discovery that could aid the production of proteins used in the medical, food, agriculture, chemical, and other industries. CREDIT Sergej Djuranovic

To begin with, the researchers changed the sequence of just the first few amino acids of a protein. As this was the start of the experiment, they didn't really expect it to have much effect. It turns out they were wrong: changing the sequence actually increased protein expression by around 300%. That's when Djuranovic and colleagues started looking at why. Using green fluorescent protein, a biomedical research tool used to estimate how much protein is in a sample, the researchers changed the sequence of the first few amino acids of the protein, generating more than 9,000 distinct versions, all identical apart from at the beginning.
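As a rough sanity check on where a library of that size comes from (an illustration, not the authors' actual design): varying just three positions over the 20 standard amino acids already yields 8,000 combinations, so the 9,000-plus versions reported imply the variation extended slightly beyond three positions.

```python
# Number of protein variants obtainable by varying k positions
# over the 20 standard amino acids (illustrative only; the study's
# exact library design is not described in this article).
AMINO_ACIDS = 20

def num_variants(positions):
    return AMINO_ACIDS ** positions

print(num_variants(3))  # → 8000
```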


There were many different versions of the green fluorescent protein, ranging from very dim to super bright. And using careful analysis techniques, the researchers, alongside others from Washington University and Stanford University, managed to identify combinations of amino acids at positions three, four, and five of the amino chain that were responsible for the spike in protein expression. The results of the study, published in the Dec. 18 issue of Nature Communications, could help increase protein production not just for the medical industry, but also for other industries including food, chemicals, and agriculture.

Capturing the World’s Carbon Offset

We all know how important it is to cut the carbon footprint created by humans. And while we've demonstrated that we can do this well, the problem isn't in capturing the carbon, but rather in storing it. Carbon capture and storage, or CCS for short, is essential for helping to cut the world's carbon dioxide emissions. At the moment, there are fewer than two dozen CCS projects scattered around the world. While this is partly down to the costs involved, it's also largely down to uncertainty over the technology's viability.


A new study, published recently in Nature Scientific Reports, demonstrates how it’s possible to install enough carbon dioxide injection wells across the globe to meet the emission cut goals set out by the Intergovernmental Panel on Climate Change (IPCC). “The great thing about this study is that we have inverted the decarbonization challenge by working out how many wells are needed to achieve emission cuts under the 2-degree (Celsius) scenario,” said Philip Ringrose, professor at the Norwegian University of Science and Technology (NTNU), a geoscientist at the Equinor Research Center located in Trondheim, and lead author of the study. It works out to around 2,000 wells per region, or 12,000 in total across the world – a small fraction of the number of wells drilled by the petroleum industry.

This visualization illustrates how CO2 is injected into a subsea geologic formation at the Sleipner field. Equinor began injecting CO2 into the formation in 1996. More than 20 million tonnes of CO2 have been injected into the formation since then. This is the equivalent to the annual emissions from 10 million cars.
CREDIT Illustration: Equinor

To get an idea of how much space there is potentially to store carbon dioxide, Ringrose and his co-author, Tip Meckel from the University of Texas Bureau of Economic Geology, first looked at worldwide continental shelves. Previous studies of this nature mainly focused on the estimated volumes in different rock formations on the continental shelf.

However, both Ringrose and Meckel agree that there's a better chance of finding carbon dioxide storage areas if rock formations that can handle intense pressure are looked at first. This is because injecting carbon dioxide into a rock formation increases the pressure within it. If more pressure is exerted than the rock can handle, cracks may appear, rendering the project unsafe and forcing it to close.

So, with that in mind, the researchers developed a way of classifying storage formations by their ability to store carbon dioxide. Under this system, Class A formations are those without any pressure limits and therefore the most preferred. Class B are those where carbon dioxide can be injected up to a certain limit, and Class C are those sites that would need to be carefully managed. “We argue that this transition from early use of CO2 injection into aquifers without significant pressure limits (Class A), through to CO2 storage in pressure-limited aquifers (Class B) and eventually to pressure management at the basin scale (Class C), represents a global technology development strategy for storage which is analogous to the historic oil and gas production strategy,” wrote the researchers. As more experience is gained injecting CO2 into rock formations, Class B and C areas will become more usable.
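The A/B/C scheme boils down to a simple decision rule. A minimal sketch of that logic follows; the pressure thresholds here are hypothetical placeholders, since the study's actual criteria aren't given in this article.

```python
# Illustrative sketch of the Class A/B/C storage scheme described above.
# The MPa thresholds are invented for illustration, not from the study.

def classify_storage_site(pressure_margin_mpa):
    """Classify a formation by how much extra pressure (MPa) it can
    safely take before fracturing (a hypothetical input)."""
    if pressure_margin_mpa > 10:
        return "A"   # no significant pressure limit: preferred
    if pressure_margin_mpa > 2:
        return "B"   # CO2 injectable up to a pressure limit
    return "C"       # requires basin-scale pressure management

print(classify_storage_site(15))  # → A
```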


As well as finding enough space to inject the CO2, experts also have to find a way of injecting it fast enough to meet the requirements set out by the IPCC, which currently stand at around 6 to 7 gigatonnes per year until 2050. But even with all 19 large-scale CCS projects currently in operation and the 4 being built, that still only accounts for around 36 million tonnes per year. Considering a gigatonne is the equivalent of 1,000 million tonnes, something pretty drastic would need to be done to reach those targets. But, as impossible as it may sound, both the IPCC and the researchers are confident it can be achieved. “With this paper, we provide an actionable, detailed pathway for CCS to meet the goals,” said Meckel. “This is a really big hammer that we can deploy right now to put a dent in our emissions profile.”
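The size of that gap follows directly from the figures quoted above:

```python
# Gap between current CCS capacity and the low end of the IPCC target.
current_capacity_mt = 36       # million tonnes CO2/year (existing projects)
target_gt = 6                  # IPCC target: 6-7 gigatonnes/year by 2050
target_mt = target_gt * 1000   # 1 gigatonne = 1,000 million tonnes

scaling_factor = target_mt / current_capacity_mt
print(f"Capacity must grow ~{scaling_factor:.0f}x")  # → ~167x
```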

Why Microsoft is Still the King of Software

Back in the day, Microsoft was the name to beat, not just in business-related software, but in software in general. Whether you were crunching numbers, preparing a pivotal presentation or simply writing your magnum opus, chances are you were doing it with a Microsoft product of some sort. Now, whilst you may still be using Microsoft software as your go-to tool for your digital tasks, there are undoubtedly more competitors out there vying for Microsoft’s crown as King of Software.

With the likes of Apple, Google and its ubiquitous cloud-based docs platform, and the many different devices such as smartphones and tablets, you’d be forgiven for thinking that Microsoft may somehow have become less relevant. Whilst it’s true that Microsoft must now share the tech space with a plethora of competing software products, it is arguably still in full control of its throne and in no mood to give it up.

In this article, we look at why Microsoft is still very much the King of Software.


Kings of The Office

Microsoft has stuck firmly at doing what it does better than everybody else. Sure, you can use Open Office for free, or leverage Google’s cloud-based docs platform, or maybe you run an Apple computer and output your written work in Pages. But regardless of which platform you use, if you want to be collaborative in the real business world, chances are you need to convert your document to DOCX (Microsoft’s default file type) before you can start sharing it amongst clients and colleagues, safe in the knowledge that they can open and read what you’ve sent. This is because when it comes to business software, Microsoft is still the standard, and it doesn’t look like it’s going to get challenged anytime soon. 

Kings of The Cloud

Microsoft are always looking for ways to add value to an already established platform with a huge user base and plenty of support; for example, they have moved towards a cloud-based system, supported by ms volume licensing, ensuring that users can access their content from any location. This supports the modern trend for an increasingly flexible working environment, as well as work collaborations (ensuring that Google no longer get to dominate that space).

And Microsoft are doing a lot of this affordably – for free, in some cases – in order to remain accessible to a bigger market. This shows strength through flexibility and adaptation, suggesting they are committed to continuously improving their product alongside whatever a competing organization can offer.


Kings of The Future

As Microsoft continue to push their ubiquitous operating systems across multiple platforms, including mobile, and focus heavily on cloud-based solutions, they will continue to enhance their office software applications. They will also continue to work on virtual reality (VR) and augmented reality (AR) solutions that leave the traditional office and disrupt the manufacturing and medical industries, alongside their work in artificial intelligence (AI), which rivals anything being worked on by Google, Facebook or Amazon. Microsoft is far from losing relevance, and it’s sitting proudly on its throne, safe (for now) from a revolution.

Does a Better Sleep Pattern Lower Your Risk of Stroke and Heart Disease?

There’s no denying that you nearly always feel better after having a full night’s sleep. But according to this latest study, published in the European Heart Journal, it’s not just the odd night that you should fully recharge. Getting a good night’s sleep on a regular basis could be the key to living a long and healthy life. 

The study, led by Dr. Lu Qi, director of the Tulane University Obesity Research Center, revealed that even people with a high genetic risk of stroke or heart disease could lower that risk by adopting healthy sleep patterns on a regular basis.


Comparing genetic variations already linked to these health problems across more than 350,000 healthy participants, the researchers found that those with good sleeping habits had a 35% lower risk of developing cardiovascular disease and a 34% lower risk of both stroke and heart disease. The researchers also found that those with the healthiest sleep patterns were those who slept for 7 or 8 hours a night, with no snoring, insomnia, or daytime drowsiness.

Upon looking at the combined effect of sleep patterns and genetic susceptibility to cardiovascular disease, it appeared that those with both a high genetic risk and a poor sleep pattern had a 1.5-fold increased risk of stroke and a more than 2.5-fold increased risk of heart disease compared to those with a low genetic risk and a healthy sleep pattern. On the flip side, a person with a high genetic risk but a healthy sleep pattern had just a 1.3-fold increased risk of stroke and a 2.1-fold increased risk of heart disease. Those with a low genetic risk but an unhealthy sleep pattern had a 1.6-fold increased risk of stroke and a 1.7-fold increased risk of heart disease.

While these results only indicate an association, further studies are needed to confirm the link.

Internet Access Evolution: The Shift from Desktop to Mobile

The pace of technological change is driven both by innovation and by the demands of those who use that technology. User-driven change has been particularly notable in the way that most of us choose to access the internet. Whether you’re one of the new wave of internet gamblers trying to redeem a Unibet online casino bonus code or an online gamer catching up with the latest esports tournament news, in 2020 you are more likely to be using a mobile than a desktop device. 

The wider internet revolution continues to spread around the globe and each year more people than ever are going online, but within that revolution, there has been a second, more dramatic shift, as we opt to use our mobile devices to access the internet rather than our PCs. 


Rapid Shift to Mobile

It is not exactly clear when we passed the point where mobile internet use outstripped desktop access, but we have certainly passed that tipping point. Back in 2013, mobile phone usage accounted for around 16% of online traffic around the globe. That figure rose dramatically, reaching 52% in 2018. But this only represents mobile phone usage. Add in the impact of other mobile devices, such as tablets, and that figure is likely to be far higher.

Technological Change Meets User Demand

Before iPads, smartwatches, and PDAs, mobile phones were simply that: devices that you could carry around with you for making telephone calls. The first commercially available mobile phones date back to the 1980s, but they were bulky and expensive devices. Mobile phone technology moved slowly. Text messaging was launched in the early 1990s, downloadable ring tones and emojis followed at the end of the decade, and Japan produced the first camera phone in 2000.

The Blackberry range of phones, which debuted in 1999, made it possible to send and receive emails through a mobile device and was rapidly adopted across all sectors of business. However, it was the development of the 3G standard in the early 2000s that made accessing the internet through a mobile device practical for most users. The era of the smartphone arguably began in January 2007, when Apple launched the iPhone, and mobile technology has gone from strength to strength ever since.

Why Mobile?

So why are so many of us choosing our mobile devices rather than our desktops to access the internet? The answer is convenience. A mobile device enables you to check your emails, do your shopping, play online games, read the news and chat online with other people around the world, all while you’re commuting on the train, relaxing at the beach or sitting in your garden. Compared to the hassle of having to get home or to the office before you can fire up your desktop, the mobile device offers quick, easy and convenient internet access. 

Changing Behavior

But just as user demand has driven mobile technology, so the technology is changing our behavior. Not only are we using our mobiles rather than our desktops to access the internet, we’re also spending longer online. Figures vary, but back in 2013, the average time spent consuming various types of media on a mobile device was around 90 minutes. But by 2018, this had risen to over 200 minutes. Is that because we are finding the internet more and more useful or because we are becoming addicted to our mobile devices? After all, the first Blackberry phones were nicknamed ‘Crackberry’ for the way that they seemed to change some users’ behavior. 


Desktop Still Rules on Conversion

Curiously, there is still one area in which mobile devices lag behind desktop computers, and that’s in sales conversion. Figures show that although mobile devices account for the majority of user visits to retail sites, the percentage of visits that result in a purchase is lower for mobiles than it is for desktops. There is evidence that the gap is closing, but it seems that, for the time being at least, we still prefer to use our desktops to make significant purchases. 

Mobile Devices and the Future

There is no sign that the growth in mobile usage is slowing down. In fact, driven by expanding demand in major developing markets, most notably China and India, global mobile phone use is expected to hit 4.78 billion in 2020. If that trend continues, and the technology continues to make mobile devices ever more accessible and powerful, we may be heading for a world in which the humble desktop is effectively obsolete.


Looking to Boost Your Attention Span? Then Turn Down Your Alpha Brain Waves!

A new study carried out by MIT neuroscientists found that people can improve their attention span by managing their own alpha brain waves when performing certain tasks. The research demonstrated that once people were able to suppress the alpha waves in one side of the parietal cortex, they paid better attention to objects located on the opposite side of their visual field.

This research is the first of its kind, demonstrating a cause-and-effect relationship never seen before – one that suggests we can improve our attention through neurofeedback. Neurofeedback is a non-invasive way of monitoring and controlling brain activity under different circumstances. While it's still not clear how long these effects might last, the technique shows promise for helping those suffering from a lack of attention or some other kind of neurological condition.


Alpha waves have a frequency of around 8 to 12 hertz and are thought to be involved in filtering out distracting sensory information. Previous research in both humans and animals has revealed a strong relationship between alpha brain waves and attention. However, researchers are still unsure whether alpha waves are responsible for controlling attention or are simply a byproduct of something else that regulates it.

To test this theory, the researchers devised an experiment in which subjects completed a task while real-time feedback on their alpha waves was relayed to them. They had to look at the center of a screen where a grating pattern was displayed and use mental effort to make the pattern more visible by increasing its contrast. As the subjects did this, researchers measured their alpha levels using magnetoencephalography (MEG). Alpha levels from both the right and left hemispheres of the parietal cortex were measured in order to calculate the asymmetry between them.
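A common way to quantify the asymmetry between hemispheres is a normalized difference of the two alpha power readings; the article doesn't give the study's exact formula, so the (L − R)/(L + R) form below is an assumption based on standard practice.

```python
# Normalized alpha-asymmetry index between the two parietal hemispheres.
# The (left - right) / (left + right) convention is assumed here; the
# study's exact definition is not stated in the article.

def alpha_asymmetry(left_power, right_power):
    return (left_power - right_power) / (left_power + right_power)

# Suppressing left-hemisphere alpha drives the index negative:
print(alpha_asymmetry(left_power=2.0, right_power=4.0))  # ≈ -0.33
```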

The experiment showed that as the asymmetry between the two hemispheres increased, the grating pattern became more visible. While subjects were not consciously aware of how they were controlling their brain waves, they still managed to do it, and this success could clearly be seen in enhanced attention on the other side of the visual field. While subjects looked at the pattern on the screen, dots of light were flashed on either side. Even though subjects were told to ignore these, researchers measured how each participant's visual cortex reacted to them.

Half of the subjects were trained to suppress the left side of the brain’s alpha waves, while the other half learned to suppress the right side. Those with reduced alpha waves in the left hemisphere exhibited a larger response in their visual cortex when shown flashes of light on the right-hand side of the screen, while the opposite effect was seen in those with reduced alpha waves in the right hemisphere. “Alpha manipulation really was controlling people’s attention, even though they didn’t have any clear understanding of how they were doing it,” says director of MIT’s McGovern Institute for Brain Research and senior author of the paper, Robert Desimone.      


On completion of the neurofeedback training session, subjects were asked to carry out two further tasks involving attention. The first was similar to the earlier experiment: subjects stared at a pattern on a screen. Some were told to look in a certain direction, while others were given no direction. Those given clear instructions to look one way mostly looked that way; those given no instruction tended to look more to the side favored during the neurofeedback training.

The second test involved the subjects looking at an outdoor scene or some other kind of computer-generated image. Tracking their eye movements, the researchers found that most people were drawn towards the side their alpha waves had been trained to favor.

So, it seems, results from both tests showed that the enhanced attention persisted. However, further research is needed to determine how long those effects may last.

MIT Researchers Improve Conductive Material Tenfold

Researchers at MIT have been working with various materials, looking for ways to improve their electrical conductivity. Working with one particular clear, conductive coating, they may have just succeeded – a development that could potentially increase the efficiency (and stability) of solar cells and touch-screen technology tenfold.

The material most commonly used in things such as solar cells and touch screens is indium tin oxide – otherwise known as ITO. And while it does the job of acting as a good conductor of electricity, its structure is quite brittle and easy to crack. While one of the researchers on this project, Professor Karen Gleason, and colleagues did improve a version of this material a couple of years ago, it was nowhere near as conductive or transparent as ITO. The new structured material, however, is much better.


Siemens per centimeter is the unit used for the combined figure of merit for a material's transparency and conductivity. ITO ranges from around 6,000 to 10,000 siemens per centimeter, which is pretty incredible. In this new research, the aim was to find a material that could reach a minimum of 35 – it reached a staggering 3,000. And even still, there's room for improvement, say the team.
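Putting the numbers quoted above side by side shows both how far the new material came and how far it still has to go:

```python
# Figures of merit quoted in the article (siemens per centimeter).
ito_range = (6000, 10000)   # typical range for ITO
target = 35                 # minimum the team aimed for
achieved = 3000             # what the new coating reached

print(achieved / target)        # ≈ 85.7 — far past the minimum target
print(ito_range[0] / achieved)  # → 2.0 — still short of ITO's low end
```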

The new, flexible material is called PEDOT. It's an organic polymer that's laid down in an ultra-thin layer via a process known as oxidative chemical vapor deposition (oCVD). This is what gives the material such high conductivity. To demonstrate this, the team applied a layer of PEDOT to a solar cell. The combination proved to be very complementary: not only did the efficiency of the solar cell improve, but its stability doubled as well.

The illustration shows the apparatus used to create a thin layer of a transparent, electrically conductive material, to protect solar cells or other devices. The chemicals used to produce the layer, shown in tubes at left, are introduced into a vacuum chamber where they deposit a layer on a substrate material at top of the chamber.
Illustration courtesy of the authors, edited by MIT News

While these initial tests involved substrates just 6 inches in diameter, the team is confident it can be applied on a much larger scale during manufacturing processes. Creating the oCVD and applying it to substrates is a relatively easy process that’s ideal for use on things such as flexible solar cells and screens. It’s a breathable material that can coat even the finest contours of an object. 

The team will still need to demonstrate the material's efficiency on a larger scale and show its performance over time, but overall, it's a very good result indeed.

Introducing Polymer Producing Robots

Another success for the world of robotics this month comes from a Rutgers-led team of engineers who have developed an innovative way of producing polymers through the use of automated technology. Using robots in such a way will make the development of many advanced, health-improving materials much easier. 

Synthetic polymers are used a lot in advanced materials. And while a good human researcher can probably create a few polymers per day, this new robotic system can churn out up to 384 different polymers at the same time. The use of polymers in developing new technologies is crucial: they're used in electronics, lighting, sensors, diagnostics, and medical devices.
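The figure of 384 simultaneous syntheses matches a standard 384-well microplate (16 rows by 24 columns). The article doesn't confirm that the Rutgers platform uses this format, but it's the likely origin of the number:

```python
# A standard 384-well plate is a 16 x 24 grid of reaction wells, which
# would let a liquid-handling robot run 384 syntheses in parallel.
# (The plate format is an inference, not stated in the article.)
import string

rows = string.ascii_uppercase[:16]   # rows A through P
cols = range(1, 25)                  # columns 1 through 24
wells = [f"{r}{c}" for r in rows for c in cols]

print(len(wells))           # → 384
print(wells[0], wells[-1])  # → A1 P24
```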


“Typically, researchers synthesize polymers in highly controlled environments, limiting the development of large libraries of complex materials,” explains Adam J. Gormley, assistant professor in the Department of Biomedical Engineering at Rutgers University–New Brunswick and senior author of the paper. “By automating polymer synthesis and using a robotic platform, it is now possible to rapidly create a multitude of unique materials.”

A Rutgers-led team adapted advanced liquid handling robotics to perform the chemistry required for synthesizing synthetic polymers. This new automated approach enables the rapid exploration of new materials valuable in industry and medicine.
CREDIT Matthew Tamasi

As well as helping researchers make materials, robotics also aids in the discovery and development of various drugs. The problem is that synthesizing polymers can be tricky, as most chemical reactions need to be done without oxygen present. However, Gormley's platform runs reactions that tolerate oxygen, which gives his team one big advantage over others: even non-experts can now create polymers in just a few simple steps.