
Breakthrough in Electron Microscopy Sets World Record for Image Resolution 

Electron microscopy has allowed scientists to get a much closer look at the properties of individual atoms. The problem is that even at this resolution, things are still a little fuzzy. Like lenses, electron microscopes have tiny imperfections known as aberrations, and in order to smooth out these defects, scientists must use special aberration correctors.


However, there’s only so much aberration correctors can do, and to correct multiple aberrations you would need an endless collection of corrector elements. Thankfully, that’s where David Muller, the Samuel B. Eckert Professor of Engineering in the Department of Applied and Engineering Physics (AEP); Sol Gruner, the John L. Wetherill Professor of Physics; and Veit Elser, a professor of physics, come in.

Together the scientists have come up with a new method of achieving ultra-high resolution with their microscope, without the need for any aberration correctors. Using an electron microscope pixel array detector (EMPAD) first introduced back in 2017, the team set a world record for image resolution using one-atom-thick molybdenum disulfide (MoS2). Electron wavelengths are much smaller than visible light wavelengths; the catch is that electron microscope lenses are far less accurate than optical ones.

Image: A ptychographic image of two sheets of molybdenum disulfide, with one rotated by 6.8 degrees with respect to the other. The distances between individual atoms range from a full atomic bond length down to complete overlap.


However, image resolution in electron microscopy has improved recently, thanks to increases in the energy of the electron beam and the numerical aperture of the lens. The end result: a well-lit subject. Previously, scientists achieved record sub-angstrom resolution through the use of a super-high beam energy and an aberration-corrected lens. Atomic bonds are typically between 1 and 2 angstroms in length, so sub-angstrom resolution enables scientists to get a very clear picture of individual atoms.
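For a sense of scale, the baseline resolution of a conventional lens can be estimated from the electron’s relativistic de Broglie wavelength and the lens’s aperture. Here is a minimal back-of-envelope sketch; the ~21 mrad aperture semi-angle is an illustrative assumption, not a figure from the article:

```python
# Back-of-envelope: electron wavelength at a given beam energy and the
# aperture-limited resolution it implies.
import math

H = 6.62607015e-34     # Planck constant (J*s)
M0 = 9.1093837015e-31  # electron rest mass (kg)
E = 1.602176634e-19    # elementary charge (C)
C = 2.99792458e8       # speed of light (m/s)

def electron_wavelength(volts):
    """Relativistic de Broglie wavelength of an electron accelerated through `volts`."""
    p = math.sqrt(2 * M0 * E * volts * (1 + E * volts / (2 * M0 * C**2)))
    return H / p

lam = electron_wavelength(80e3)  # 80 keV beam, as in the article
alpha = 21e-3                    # aperture semi-angle in radians (assumed)
d = 0.61 * lam / alpha           # Rayleigh-style diffraction limit

print(f"wavelength: {lam * 1e12:.2f} pm")                    # ~4.18 pm
print(f"lens-limited resolution: {d * 1e10:.2f} angstroms")  # ~1.2 A
```

On these assumed numbers, a perfect 80 keV lens bottoms out around 1.2 angstroms, which is why reaching 0.39 angstroms, as described next, required ptychography rather than a better lens.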

Eventually, the group achieved a resolution of 0.39 angstroms, setting a new world record, and at a less damaging beam energy. The team used both ptychography and the EMPAD to achieve these results. With the beam energy set at just 80 keV, the microscope is able to pick up images with the greatest of clarity. With a resolution capability this small, the team needed a new test subject for the EMPAD method.


Stacking two sheets of MoS2 on top of one another, Yimo Han and Pratiti Deb set to work making sure one sheet was slightly askew so that atoms in each sheet were within visible distances of each other. “It’s essentially the world’s smallest ruler,” says Gruner. The EMPAD is capable of recording a wide range of intensities and has now been fitted on various microscopes all across campus.


How Drones are Changing the World as We Know It

Over the past few years, the use of drones in both commercial and personal settings has risen considerably. One area in particular that seems to have benefited greatly from the introduction of drones is data collection. Through the use of these robotic flying machines, teams can monitor and survey large areas of land more efficiently without impacting the landscape or risking their own safety.


However, some people do have concerns over the potential safety, ethics, and security issues that may come with these drones. To help ease some of those concerns, Oxford University’s Centre for Technology and Global Affairs teamed up with various drone companies to discuss potential models for governing these devices while in the skies.

These were also some of the things discussed at the very first ‘Robotics Skies Workshop: The Role of Private Industry and Public Policy in Shaping the Drones Industry’. It was a chance to get policymakers, practitioners, and experts from the government, industry, and academia to discuss the upcoming regulatory challenges faced by the Unmanned and Autonomous Aerial Vehicles (UAVs & AAVs) industry. 


Some of the main points discussed were advancements in geofencing, the use of drones as a service, raising awareness of the safe use of drones, and Unmanned Traffic Management (UTM). Key speakers at the event included Jessie Mooberry, Head of Deployment with Altiscope from Airbus; Matthew Baldwin, Deputy Director-General of Mobility and Transport at the European Commission; and Christian Struwe, Head of Public Policy Europe at DJI.

“I am pleased to see that the first workshop on the future of autonomous drones perfectly fulfills the Oxford Centre’s long-term vision to serve as a powerful policy-building hub for the beneficial development of breakthrough technologies,” says Artur Kluz, the Centre’s Founding Donor. Jessie Mooberry commented: “By convening industry, government, academia, and civil service, Robotic Skies fostered necessary deep and wide conversations about the role of automation on our airspace as well as the physical and digital infrastructure required to enable this future.”

The Centre provides a place where collaboration takes place and resolutions are found. Robotic Skies is a great way to bridge the gap between researchers and policymakers across the world and is the first of many planned events to be held at the Centre.



Engineers Develop New System for Singling Out Certain Sounds in Music Videos

Using artificial intelligence (AI), engineers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed a new system that can extract just one instrument sound from a whole video. Not only that, the deep learning system can also make those sounds louder or softer as required.

The system is completely self-sufficient and needs no human controller to do its job. Named “PixelPlayer,” it can identify certain instruments at the pixel level and then isolate any sounds linked with that instrument. The ability to do this means that someday we could see huge improvements in audio quality at concerts.


In a new paper, the researchers demonstrated how PixelPlayer can isolate more than 20 different instrument sounds. And they’re quite confident that with more training, the system could easily identify more. However, they do say that the system will likely still struggle to tell the difference between subclasses of instruments (e.g., a tenor sax versus an alto sax).

Previous attempts at separating the individual sources of sound have focused purely on audio, which requires a lot of human labeling. PixelPlayer, on the other hand, brings vision into the mix, making human labeling unnecessary. It works by locating the image regions responsible for making a particular sound and then separating the input audio into components that represent the sound from each pixel.
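A rough sketch of that idea in code: learn per-pixel visual features and a bank of audio spectrogram components, then reconstruct the sound of any one pixel as a weighted mix of those components. The single conv layers and tensor shapes below are stand-in assumptions for illustration, not the actual PixelPlayer architecture:

```python
# Toy version of the "mix-and-separate" idea: per-pixel visual weights
# select among K latent audio components extracted from the mixed audio.
import torch
import torch.nn as nn

K = 16  # number of latent audio channels (assumed)

video_net = nn.Conv2d(3, K, kernel_size=3, padding=1)  # stand-in for a deep visual CNN
audio_net = nn.Conv2d(1, K, kernel_size=3, padding=1)  # stand-in for an audio U-Net

frame = torch.randn(1, 3, 14, 14)       # coarse video frame features (toy)
mix_spec = torch.randn(1, 1, 256, 256)  # spectrogram of the mixed audio (toy)

pixel_feats = torch.sigmoid(video_net(frame))  # (1, K, 14, 14): per-pixel weights
audio_comps = audio_net(mix_spec)              # (1, K, 256, 256): K audio components

# The estimated sound of pixel (i, j): weight the K components by that
# pixel's visual features and sum them into one spectrogram.
i, j = 7, 7
w = pixel_feats[0, :, i, j]                              # (K,)
pixel_spec = (w[:, None, None] * audio_comps[0]).sum(0)  # (256, 256)
```

In training, the networks would be optimized end to end so that spectrograms reconstructed this way match mixtures of known videos, which is what removes the need for human labels.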

“We expected a best-case scenario where we could recognize which instruments make which kinds of sounds,” says Zhao, a CSAIL Ph.D. student. “We were surprised that we could actually spatially locate the instruments at the pixel level. Being able to do that opens up a lot of possibilities, like being able to edit the audio of individual instruments by a single click on the video.” 


PixelPlayer works by using neural networks and deep learning techniques to find patterns in data. Researchers are calling the system “self-supervised,” as they don’t yet fully understand every part of how it learns which sounds belong to which instruments. But Zhao does say that he can tell when the system identifies certain aspects of the music. For example, fast, pulsing patterns tend to be linked to instruments such as the xylophone, while smoother, more harmonic frequencies tend to correlate with instruments such as the violin.


Scientists Develop New Technique to Study Antibiotic Resistance  

It’s quite an alarming thought that antibiotic resistance is spreading worldwide, but it is. Thankfully, a group of EMBL researchers has developed a technique that helps them study the melting behavior of proteins, which in turn helps them better study bacteria. 

The technique is called thermal proteome profiling (TPP). It was developed back in 2014 and enables scientists to compare the melting behavior of all the different proteins of a cell or organism before and after an event, such as the administration of a drug. By adapting TPP to work with bacteria, the researchers can now use it to look at the activity and structure of proteins while they’re present in a live bacterial cell.


Where we as humans stop functioning when our bodies reach temperatures of 42 degrees Celsius or higher, bacteria such as E. coli are just getting started. “We discovered that proteins in the middle of a bacterial cell are less tolerant to heat than those at the cell surface,” says Mikhail Savitski. “Surprisingly, a protein’s location is more predictive for its melting behavior than which other proteins it interacts with.”

Image: Aleksandra Krolik / EMBL

With TPP, the researchers can also look at how different drugs affect bacteria. Drugs that interact with proteins typically raise those proteins’ melting points, increasing their heat tolerance. Therefore, by comparing the heat tolerance of untreated bacterial cells to that of treated ones, the researchers can identify the targets of antimicrobial drugs and figure out how the bacterial cells either give in or try to fight the drug.
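In code, that comparison boils down to fitting a sigmoidal melting curve to the fraction of protein still soluble at each temperature and looking at the shift in melting point (Tm) with and without the drug. A minimal sketch, with made-up data points:

```python
# Fit melting curves for one protein, untreated vs. drug-treated, and
# report the Tm shift. The measurements below are invented for illustration.
import numpy as np
from scipy.optimize import curve_fit

def melt_curve(T, Tm, slope, plateau):
    """Sigmoidal fraction of protein still soluble at temperature T."""
    return (1.0 - plateau) / (1.0 + np.exp((T - Tm) / slope)) + plateau

temps = np.array([37, 41, 45, 49, 53, 57, 61, 65], dtype=float)
untreated = np.array([1.00, 0.97, 0.85, 0.55, 0.25, 0.10, 0.05, 0.03])
treated   = np.array([1.00, 0.99, 0.95, 0.80, 0.55, 0.28, 0.12, 0.05])

p0 = (50.0, 2.0, 0.05)  # initial guesses: Tm, slope, plateau
(tm_untreated, *_), _ = curve_fit(melt_curve, temps, untreated, p0=p0)
(tm_treated, *_), _ = curve_fit(melt_curve, temps, treated, p0=p0)

print(f"Tm shift: {tm_treated - tm_untreated:+.1f} degrees C")
# A positive shift (extra heat tolerance) points to a drug-protein interaction.
```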


“In one particular case, we were able to elucidate a novel drug resistance mechanism,” says Andre Mateus, lead author of the study and a postdoc working at EMBL. “Cells use proteins to pump antibiotics out of the cell. After genetically removing one such efflux pump from their chromosome, bacteria became more sensitive to many drugs but curiously more resistant to one specific antibiotic called aztreonam. Using TPP, we found that this was due to dramatically reduced levels of a specific porin – a protein that acts as a pore – used by aztreonam to enter the cell.”

TPP allows scientists to study the effects of perturbations much more quickly than other techniques. Many of the insights obtained using it would be impossible to gain with conventional methods, showing just how valuable TPP is for studying bacteria in detail.



What are Skill Based Slot Machines?

With the rise of online casinos, PlayStations, Xboxes, and other types of gaming that are more conveniently played at home or online, brick-and-mortar casinos and traditional slots have been losing their appeal, especially to the younger generation, or what we call Millennials. The more advanced technology becomes, the more people realize that it is actually more convenient to play online, where they can reap the same benefits, than to physically go to a casino.

Casinos have no choice but to keep on innovating and coming up with ways to make casino games more appealing to the public. Skill-based slot games open an avenue for creativity when it comes to casino and slot games. It is a next step for the Millennials who are used to gaming on phones, tablets, Xbox, PlayStation, etc. At present, they don’t show the desire to play in a casino environment. This is the step to get casinos closer to their goal.


Skill-based slots are gambling machines where the biggest factor in winning is the player’s ability to play the game. The outcome is based on skill instead of chance. They also allow game developers, operators, and suppliers to create variable payback based on a wide variety of identifiers, rewarding better players with a higher payback. This is what differentiates skill-based slots from traditional slot games, where there is hardly any skill involved.

You usually choose your stake and then tell the machine when to spin, simply hoping that luck will be on your side. With skill-based slots, you can boost your payout. Players know that they have a material effect on the outcome of the game and on how much money they can win: the payback percentage is affected by how well they play.
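As a toy illustration of the difference, the simulation below compares a fixed-payback machine with one whose payback percentage scales with player skill. All of the percentages and the payout structure are invented for the example:

```python
# Toy model: traditional slots return a fixed percentage of wagers;
# skill-based slots return more to better players.
import random

def traditional_spin(bet, rtp=0.90):
    # pays 10x the stake with probability rtp/10, for a fixed return-to-player
    return bet * 10 if random.random() < rtp / 10 else 0.0

def skill_spin(bet, skill, base=0.85, bonus=0.10):
    # payback percentage rises with skill in [0, 1]
    rtp = base + bonus * skill
    return bet * 10 if random.random() < rtp / 10 else 0.0

n, bet = 100_000, 1.0
avg_trad = sum(traditional_spin(bet) for _ in range(n)) / n
avg_novice = sum(skill_spin(bet, skill=0.1) for _ in range(n)) / n
avg_expert = sum(skill_spin(bet, skill=0.9) for _ in range(n)) / n
print(f"traditional: {avg_trad:.2%}  novice: {avg_novice:.2%}  expert: {avg_expert:.2%}")
```

In this toy model, the traditional machine pays everyone back about 90 percent of what they wager, while the skill-based machine pays the expert measurably more than the novice.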

Slot games and casinos give the highest rewards to their best customers, meaning the customers who make the biggest bets per game. The higher the bet, the higher the prize, and the rewards are usually based on total wagers. Given this, wouldn’t it make sense for casinos to give a higher payback percentage to the most valuable players, who are actually placing more wagers or higher bets?

Skill-based slots have been around for years, although they haven’t succeeded in taking over the market or in convincing traditional slots players to switch to these new-generation games. They have not progressed much or changed dramatically because of regulatory confines; laws and regulations have hindered their circulation. Some people find skill-based slots unfair because they put certain players at an advantage, and according to the rules and regulations, slots should remain a game of chance.

Another reason is that skill-based slots have variable payback percentages. With traditional slots, a player wouldn’t know how much he or she will be getting until the end of the game. With skill-based slots, a player already has an idea of what the payback percentage is and can be. Given this, skill-based slots arguably defeat the purpose of casino games, which are games of chance; they definitely change the direction of casino gaming.


Casino games have a reward system for their players. Traditional slot machines are programmed to accurately register a player’s total wager and to reward the player based on that total, no matter how big or small the individual bets are.

Skill-based slots are now making their rounds in casinos in order to attract gamers and gamblers to play slots. They make slot machines and traditional casino games more appealing. Here are some of the skill-based slot games that have been making their rounds in casinos:

1. Danger Arena / Pharaoh’s Secret Temple

2. Space Invaders

With skill-based slots, the machine isn’t aware of how much you have wagered and doesn’t take your bet size into consideration. Any outcome is possible regardless of your player rewards status or your bet size. If you are a traditional slots player, you may find this unfair, because you have invested so much yet you are not reaping the benefits of that investment unless you are really skilled at that particular game.

Skill-based slots will keep generating numbers and the machines won’t care whether or not you have made big bets. The outcomes are random and they won’t be affected by any factor such as you being a bigger player. This is the reason why despite the rise of skill-based slot games in casinos or online, traditional slot players still prefer the traditional slots games over the skill based ones. The skill-based slot games are only appealing to the younger generation because of their competitive nature.

Skill-based slots may be widely accepted someday, but they definitely will not replace traditional slots games. They will be in a league of their own.



5 Ways Artificial Intelligence Can Make the Construction and Engineering Sector More Efficient

It is no secret that technology has made its way into many different sectors, even highly specialized ones. This is even more true thanks to AI technology, which is slowly but surely making its way into many different areas. One of the latest applications of artificial intelligence is in construction engineering. The addition of AI to the civil engineering world has already seen great success, even though the uptake has not been as fast as in other sectors.


The addition of artificial intelligence to the civil engineering field has been slow but steady. The pace is due to the fact that although some aspects of a project can be improved with AI, the bulk of the work still literally lies in the hands of actual people. Bricklaying, one of the oldest practices still around, cannot simply be replaced with artificial intelligence. At least, not for the time being.

Nevertheless, AI has indeed helped in the initial stages of projects, where algorithms are used to anticipate possible challenges and to improve productivity and efficiency. In fact, companies that have already started implementing artificial intelligence have been shown to be likely to profit as much as 50% more than those that haven’t.

With five distinct applications that AI can be put to in order to speed projects up, civil engineering can begin saving time and, therefore, money:

1. Reinforcement learning: the AI works out the best way of doing something, which is especially helpful when it comes to planning and scheduling (see the sketch after this list).

2. Predictive applications for forecasting project risks: as the name states, these help gauge how viable proposed solutions are.

3. Supervised learning applications for modularization and prefabrication: in essence, keeping tabs on the supply chain.

4. Machine learning: as the name suggests, this is how robotic arms learn the steps to prefabricate materials and/or carry out general maintenance.


5. Image recognition: working with drones and three-dimensional imagery, this application is used for quality control. The images gathered show what the construction will look like once completed, allowing the engineers to correct any mistakes and/or modify the project as it develops.
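To make the first of those applications concrete, here is a toy Q-learning sketch in which an agent learns the right order for two construction steps. The states, actions, and rewards are all invented for illustration; real schedulers work over far larger state spaces:

```python
# Toy Q-learning for sequencing: the agent learns to pour the foundation
# before framing the walls.
import random

# States: 0 = nothing built, 1 = foundation done, 2 = job complete.
# Actions: 0 = pour foundation, 1 = frame walls (wasted before the foundation).
Q = [[0.0, 0.0], [0.0, 0.0], [0.0, 0.0]]
alpha, gamma, eps = 0.5, 0.9, 0.1

def step(state, action):
    """Return (next_state, reward) for taking `action` in `state`."""
    if state == 0:
        return (1, 1.0) if action == 0 else (0, -1.0)  # foundation must come first
    return (2, 10.0) if action == 1 else (1, -1.0)     # then framing finishes the job

for _ in range(500):
    s = 0
    while s != 2:
        # epsilon-greedy: mostly exploit the best-known action, sometimes explore
        a = random.randrange(2) if random.random() < eps else max((0, 1), key=lambda x: Q[s][x])
        s2, r = step(s, a)
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

print(Q[0], Q[1])  # learned values favor foundation first, then framing
```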

With the engineering and construction (E&C) field worth more than $10 trillion a year, the addition of AI is seen as further progress for a sector that remains critically under-digitized. The consulting firm McKinsey & Company has stated that E&C companies invest only about 1 percent in technology, putting the sector way behind many of its business competitors. The time has come for the civil engineering and construction sectors to embrace technology and the many advantages it has to offer.



It’s Raining on the Self-Driving Car Parade

The development of the self-driving car industry is advancing incredibly fast. No longer is Tesla the only one daring to test the trend; now General Motors, Uber, and even Google have their own working prototypes roaming the roads. They appear to be gaining popularity quickly too, with almost two-thirds of Millennials stating they’d be happy to own a self-driving car within the next 10 years. And while this all sounds quite promising, there’s one big problem: rain.

“We have computer vision algorithms which can be used by cars. But if you train a vehicle using the current algorithms, they don’t perform so well in adverse weather conditions,” says the lead author of the ‘Raincouver’ study, Fred Tung. Self-driving cars learn through the use of machine learning algorithms. Manually labeled images are fed to the computer at a per-pixel level in order for it to learn to recognize them. This process is known as semantic segmentation. 
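A minimal sketch of that per-pixel training step is below; the single convolutional layer and random tensors are stand-ins for a real segmentation network and manually labeled dashcam frames:

```python
# Semantic segmentation in miniature: classify every pixel of a frame and
# train against per-pixel labels with a cross-entropy loss.
import torch
import torch.nn as nn

NUM_CLASSES = 3  # e.g., person / road / vehicle, as in the Raincouver labels

model = nn.Conv2d(3, NUM_CLASSES, kernel_size=3, padding=1)  # stand-in for a deep net
opt = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()  # averaged over every pixel

frame = torch.randn(1, 3, 64, 64)                    # one dashcam frame (toy)
labels = torch.randint(0, NUM_CLASSES, (1, 64, 64))  # manual per-pixel labels (toy)

opt.zero_grad()
logits = model(frame)           # (1, NUM_CLASSES, 64, 64): class scores per pixel
loss = loss_fn(logits, labels)  # per-pixel classification loss
loss.backward()
opt.step()
```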


While there are several datasets out there available for self-driving cars, most were shot under near perfect conditions (i.e. good weather and daylight). But Tung and his team wanted to know how the algorithms would perform when using footage taken in the rain. So, they decided to attach a camera to the dashboard of a car and then proceeded to drive around Vancouver gathering footage. Every six seconds the team labeled the people, roads, and vehicles on screen. They named the project ‘Raincouver’.

It appeared that the rain really did affect the way the algorithms worked. Streets looked different; glare from the windshield and headlights sometimes obstructed the view, and the computer found it difficult to distinguish people against the dark backgrounds they stood in. Humans have difficulty seeing in the dark, so how can we expect a computer to be any different?


Vancouver gets around 160 rainy days per year, so if self-driving cars are unable to cope in rainy conditions, there’s going to be a big problem. Thankfully, Tung and his team have come up with some ideas for getting around this issue. The first thing they’re proposing is to expand the datasets used to train the vehicles: capturing more footage in more cities under different conditions would enable the system to learn much more. The researchers even suggested adding things like trees and buildings to the dataset to help cars navigate.

Whether you’re for or against them, self-driving cars are coming. Tung believes they could help solve the congestion problems of many cities or help the elderly or those with impaired vision get to where they need to, safely. And, computers won’t get distracted by the radio or phones like people can be. It’s clear that these weather issues need to be addressed before self-driving cars will be released to the mainstream public. But, it is coming. So, fasten your seatbelt and get ready for the ride.



How a Collision with the Sausage Galaxy Made the Milky Way What it is Today

Around 8 to 10 billion years ago, astronomers believe, our very own Milky Way galaxy had a head-on collision with a much smaller object, nicknamed the ‘Sausage’ galaxy. And it’s this cosmic collision that scientists believe is responsible for reshaping the structure of the Milky Way, including its outer halo and prominent inner bulge.

Unfortunately, the dwarf galaxy did not survive the collision. It was quick to fall apart and disperse. According to Vasily Belokurov of the University of Cambridge and the Center for Computational Astrophysics at the Flatiron Institute, the dwarf was literally torn to shreds during the collision, leaving its stars spinning in orbits very close to the center of our own galaxy. “This is a telltale sign that the dwarf galaxy came in on a really eccentric orbit and its fate was sealed,” says Belokurov.


Using data from the European Space Agency’s Gaia satellite, graduate student Gyu Chul Myeong and colleagues successfully outlined the details of this extraordinary event in several papers in the Monthly Notices of the Royal Astronomical Society, The Astrophysical Journal Letters, and arXiv.org.

The Gaia satellite has been mapping out the contents of our galaxy for many years now, and as a result, astronomers now know the locations and trajectories of our celestial neighbors with great precision. It was the shape traced out by the stars left over from the merger that earned it the nickname “the Gaia Sausage,” says Wyn Evans of Cambridge. “We plotted the velocities of the stars, and the sausage shape just jumped out at us. As the smaller galaxy broke up, its stars were thrown onto very radial orbits. These Sausage stars are what’s left of the last major merger of the Milky Way.”
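A toy plot makes the nickname intuitive: stars thrown onto highly radial orbits have a wide spread in radial velocity but very little sideways motion, so their velocity distribution stretches into an elongated blob. The numbers below are illustrative, not Gaia data:

```python
# Sketch of a "sausage" in velocity space: broad radial spread, narrow
# azimuthal spread, as expected for debris on very eccentric orbits.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
v_r = rng.normal(0, 160, 2000)   # km/s: large radial velocity spread (assumed)
v_phi = rng.normal(0, 30, 2000)  # km/s: little net rotation (assumed)

plt.scatter(v_r, v_phi, s=2, alpha=0.4)
plt.xlabel(r"radial velocity $v_r$ (km/s)")
plt.ylabel(r"azimuthal velocity $v_\phi$ (km/s)")
plt.gca().set_aspect("equal")
plt.title("Elongated, sausage-like velocity distribution (toy)")
plt.show()
```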


While there have been other galactic collisions with our own Milky Way, none have been as significant as the Sausage collision. Including all its gas, stars, and dark matter, the total mass of the Sausage galaxy was more than 10 billion times that of our sun. When it crashed into the Milky Way, the impact caused a lot of damage, and the galaxy had to regrow.

In simulations of this event, stars from the Sausage galaxy enter elongated orbits. Evidence of this can be seen in the paths of the stars inherited from the dwarf galaxy. “The Sausage stars are all turning around at about the same distance from the center of the galaxy,” says Alis Deason of Durham University. Because the stars all turn around at about the same place, the density of the Milky Way’s halo drops off considerably there.


How the Brain Understands What We See and Knows the Right Action to Take 

When you see something you want in the store, you reach for it. When you notice the traffic lights begin to change green, you put your foot down. And while these things seem like completely natural responses to everyday occurrences, there’s a lot more going on in the brain than we give it credit for. 

A new study carried out by researchers from MIT’s Picower Institute for Learning and Memory proves how crucial one particular part of the brain is when it comes to transforming seeing into doing. That region is called the posterior parietal cortex (PPC). 


Image: PPC neurons, engineered to glow when active, flicker in response to a mouse seeing a visual stimulus and deciding whether to respond with a licking motion. (Sur Lab / Picower Institute)

As explained by senior author Mriganka Sur, the Paul E. and Lilah Newton Professor of Neuroscience in the Department of Brain and Cognitive Sciences, vision is where it all begins, but that visual data then has to be transformed into motor commands. Sur is hoping that the new study will help to explain why some people who have suffered a stroke or brain injury experience a problem called “hemispatial neglect,” in which they are unable to act upon objects lying on one side of their field of vision. They can often see the item; their brain just doesn’t recognize the need to do anything about it.

The study used mice to pinpoint exactly how the PPC springs into action, and it showed that the region contains a mix of neurons attuned to processing visuals, making decisions, and taking action. “This makes the PPC an ideal conduit for flexible mapping of sensory signals onto motor actions,” says co-lead author of the study Gerald Pho, a former graduate student in the Sur lab who’s now at Harvard University.

During the trial, the mice were given one simple task to complete: if they saw a striped pattern move upward on the screen, they were to lick a nozzle to receive a liquid reward. If the stripes moved sideways, they were not to lick; if they did, they would get a bitter liquid instead of the reward. As the mice carried out the test, researchers recorded their neuron activity in two areas of the brain: the visual cortex, which processes sight, and the PPC, which receives input from the visual cortex and other regions.


The researchers found that in both regions the cells glowed more brightly after becoming active, indicating quite clearly at what stage they got involved. Those in the visual cortex seemed to light up when a pattern emerged and moved, whereas those in the PPC showed more variety in how they responded. A few of them (around 30 percent) acted in the same way as the visual cortex neurons, becoming active when a pattern moved the right way. But most responded not just to seeing something but also to the chance to act upon it.

“Many neurons in the PPC seemed to be active only during particular combinations of visual input and motor action,” explains co-lead author Michael Goard, a former MIT postdoc now at UC Santa Barbara. “This suggests that rather than playing a specified role in sensory or motor processing, they can flexibly link sensory and motor information to help the mouse respond to their environment appropriately.” Even the odd error was instructive, suggesting that a lot of PPC neurons are in fact oriented toward acting.

To further test their theory, the researchers then changed the rules of the task, this time having the nozzle dispense the reward when the stripes moved sideways and the bitter liquid when they moved upward. The mice were still seeing the same patterns, but the rules had been reversed.


Looking at the same regions of the brain, the researchers saw that the visual cortex neurons were unchanged in terms of their activity. The PPC neurons, however, changed their responses completely: those that had responded selectively to upward-moving stripes now responded to the sideways ones instead. This showed learning taking place directly at the cellular level.
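A toy model of that remapping: keep the “visual” representation fixed and let a simple delta rule relearn the linkage from visual units to a lick/no-lick output when the reward rule reverses. This is purely an illustration of the idea, not the study’s actual analysis:

```python
# Fixed sensory code, flexible sensorimotor linkage: a delta rule flips
# the stimulus-to-action weights when the reward contingency reverses.
import numpy as np

rng = np.random.default_rng(1)
stimuli = {"up": np.array([1.0, 0.0]), "side": np.array([0.0, 1.0])}  # fixed visual code
w = rng.normal(0, 0.1, 2)  # learnable linkage to the "lick" output
lr = 0.2

def train(rule, steps=200):
    """rule maps stimulus name -> whether licking is rewarded."""
    global w
    for _ in range(steps):
        name = "up" if rng.random() < 0.5 else "side"
        x = stimuli[name]
        lick = 1.0 if x @ w > 0 else 0.0
        target = 1.0 if rule[name] else 0.0
        w += lr * (target - lick) * x  # simple delta rule

train({"up": True, "side": False})
print("after rule 1:", w)  # "up" weight positive, "side" weight non-positive

train({"up": False, "side": True})  # reversed rule, same stimuli
print("after rule 2:", w)  # the linkage flips; the visual code never changed
```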

“If you flipped the rules of traffic lights so that red means go, the visual input would still be driven by the colors, but the linkage to motor output neurons would switch, and that happens in the PPC,” says Sur. These findings support earlier research into this function, and the researchers are confident their results will be used in further analyses of the PPC.


Neuroscientists Shed Light on the Role of Certain Genes Associated With Alzheimer’s

Following a recent study conducted by a group of MIT neuroscientists, it’s been noted that people with the APOE4 gene variant are more likely to develop late-onset Alzheimer’s disease than the general population. The variant is also thought to be around three times as common in Alzheimer’s patients as in those free of the disease.

Until recently, researchers knew very little about why APOE4 poses a higher risk for Alzheimer’s. After conducting a comprehensive study of both APOE4 and the more commonly found APOE3 gene, researchers found that the former promotes the buildup of the beta-amyloid proteins that cause issues in the brains of those with Alzheimer’s.


Image: In this 3D brain “organoid,” microglia-like cells, labeled in red, fail to properly clear amyloid proteins (green) from the brain tissue. (Courtesy of the researchers)

“APOE4 influences every cell type that we studied, to facilitate the development of Alzheimer’s pathology, especially amyloid accumulation,” explains Li-Huei Tsai, senior author of the study and director of MIT’s Picower Institute for Learning and Memory. The researchers also discovered that they could wipe out the signs of Alzheimer’s altogether in brain cells with APOE4 by converting the gene into APOE3 instead.

The full name for APOE is apolipoprotein E, and the gene comes in three variants: APOE2, APOE3, and APOE4. It binds to lipids and cholesterols in order to help cells absorb the lipids. Astrocytes are the cells responsible for producing lipids within the brain; these lipids are then secreted and absorbed by neurons with the help of APOE.

It’s estimated that around 8 percent of the general population has APOE2, 78 percent has APOE3, and 14 percent has APOE4. However, the statistics for those with late-onset Alzheimer’s are much different, with only 4 percent having APOE2, 60 percent having APOE3, and a hugely increased 37 percent having APOE4.
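The arithmetic behind the “around three times as common” figure quoted earlier follows directly from these percentages:

```python
# Compare APOE variant frequencies in the general population vs.
# late-onset Alzheimer's patients, using the figures from the article.
general = {"APOE2": 0.08, "APOE3": 0.78, "APOE4": 0.14}
alzheimers = {"APOE2": 0.04, "APOE3": 0.60, "APOE4": 0.37}

for variant in general:
    ratio = alzheimers[variant] / general[variant]
    print(f"{variant}: {ratio:.1f}x as common in Alzheimer's patients")
# APOE4 comes out at ~2.6x, i.e., close to three times as common.
```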

“APOE4 is by far the most significant risk gene for late-onset, sporadic Alzheimer’s disease,” says Tsai. “However, despite that, there really has not been a whole lot of research done on it. We still don’t have a very good idea of why APOE4 increases the disease risk.” Using human induced pluripotent stem cells, the team managed to coax those stem cells to differentiate into three kinds of brain cells: astrocytes, neurons, and microglia.


Then, using CRISPR/Cas9, the researchers converted APOE3 to APOE4 in the stem cells. Because the resulting cells were exactly the same apart from the APOE gene, the team was able to attribute any difference between them to the gene. In neurons, cells expressing APOE3 and APOE4 differed quite substantially, with around 250 genes showing reduced activity and 190 showing increased activity in the APOE4 cells. In astrocytes, the numbers were higher still. But the biggest change of all was found in the APOE4 microglia, with more than 1,100 genes showing reduced activity and around 300 showing more.

As well as genetic changes, the researchers also saw changes in the cells’ behavior. They found that APOE4 neurons formed more synapses while secreting higher levels of amyloid protein, and that APOE4 astrocytes have dysregulated cholesterol metabolism. Microglia were affected in a similar way, becoming much slower at removing foreign matter when they carried APOE4.

In another experiment, the researchers developed 3D miniature brain models from cells known to cause early-onset Alzheimer’s. These organoids had high levels of amyloid aggregates, which were almost completely cleared away when exposed to APOE3 microglia. APOE4 microglia, on the other hand, had no clearing effect at all.

Image: A microglia-like cell grown from human cells expressing the APOE4 protein. (Courtesy of the researchers)


In conclusion, Tsai believes that APOE4 may be responsible for disrupting specific signaling pathways, which in turn leads to the changes in cell behavior demonstrated during the study. “From this gene expression profiling, we can narrow down to certain signaling pathways that are dysregulated by APOE4,” says Tsai. “I think that this definitely can reveal potential targets for therapeutic intervention.”

The findings of this study certainly do suggest that gene-editing is an effective way of treating Alzheimer’s patients who have the APOE4 gene. “If you can convert the gene from E4 to E3, a lot of the Alzheimer’s associated characteristics can be diminished,” says Tsai. 
