
Researchers Develop Synthetic T-Cells That are Almost an Exact Replica of Human T-Cells

Being able to recreate near-perfect replicas of the human T-cell is an astonishing feat that may bring us one step closer to more effective cancer and autoimmune disease treatments. It could also help us gain a better understanding of how human immune cells behave. Eventually, these manufactured cells could be used to strengthen the immune system of people with cancer or immune deficiencies.

Because of their extremely delicate nature, human T-cells are hard to use in research as they only survive for a few days once they’ve been extracted. “We were able to create a novel class of artificial T-cells that are capable of boosting a host’s immune system by actively interacting with immune cells through direct contact, activation, or releasing inflammatory or regulatory signals,” explains Mahdi Hasani-Sadrabadi, an assistant project scientist at UCLA Samueli. “We see this study’s findings as another tool to attack cancer cells and other carcinogens.”


When an infection enters the body, T cells become activated and begin to flow through the bloodstream to the affected areas. To get where they need to go, T cells have to squeeze through small pores and gaps, sometimes deforming to as little as a quarter of their normal size. They can also expand to around three or four times their size, helping them fight invading antigens.

UCLA scientists developed artificial T cells that, like natural T cells, can deform to squeeze between tiny gaps in the body, as shown in this schematic.

It’s only very recently that bioengineers have been able to replicate human T cells this closely. The team achieved it by fabricating the cells with a microfluidic system: by combining mineral oil with an alginate biopolymer and water, the researchers created alginate microparticles that mimic the structure and form of human T cells. The microparticles were then collected from a calcium ion bath, where their elasticity could be tuned by adjusting the calcium ion concentration.


To give the synthetic T cells the same infection-fighting, tissue-penetrating, inflammation-regulating properties as natural T cells, the researchers had to adjust the cells’ biological attributes. They did this by coating the cells with phospholipids and then using a process called bioconjugation to link them with CD4 signalers, the particles responsible for activating natural T cells to attack cancer cells or infections.


How You Can Make Treasure From Trash

Have you ever heard the saying, “one man’s trash is another man’s treasure”? There are many instances where one person is quite happy to throw something away, only for someone else to say, “I could have used that.” And while that may not be true in every circumstance, it certainly rings true in the world of composting.


A new report published in ACS Omega explains how gases collected while making compost can be combined with rubber to produce optimized electronic sensors and sealants. Rubber comes from the Hevea brasiliensis tree and is used in a wide range of applications thanks to its durability, elasticity, and flexibility. But before natural rubber can be used in everyday items such as tires and rain boots, it first has to go through a lengthy process.

During the manufacturing process, other materials are added to the polymer to make it usable. One of them is carbon black, a filler used to enhance the properties of rubber. But because it’s needed in such vast amounts, it also affects some of the rubber’s properties, like its color. For that reason, scientists have been looking for a replacement for carbon black.


One alternative currently being explored is the use of graphitic nanocarbons as fillers, as these would be more cost-effective and more consistent in size than other fillers in production. The researchers used graphitic nanocarbons extracted from methane produced by compost. Combining these nanocarbons with natural rubber, they formed a new composite.

Upon testing, the researchers found that the composite only became conductive once it was loaded with 10 weight percent of the nanocarbons, making it a good candidate for developing sensors and for use as a sealant for electrical devices. The team concludes that nanocarbons are a viable replacement for carbon black.
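That 10-weight-percent onset is typical of percolation behavior: below the threshold the filler particles are too sparse to form a connected conducting network, and above it conductivity rises steeply. The short Python sketch below illustrates that behavior with a standard percolation scaling law; the threshold comes from the article, but the scaling exponent and prefactor are illustrative assumptions, not values reported by the researchers.

```python
# Toy percolation model for composite conductivity versus filler loading.
# The 10 wt% threshold is taken from the article; the scaling law, the
# exponent, and the prefactor are illustrative assumptions, not values
# reported by the researchers.

def composite_conductivity(filler_wt_pct,
                           threshold_wt_pct=10.0,
                           sigma0=1.0,      # assumed prefactor, S/m
                           exponent=2.0):   # typical 3D percolation exponent
    """Estimate conductivity of a rubber/nanocarbon composite.

    Below the percolation threshold the composite is treated as insulating;
    above it, conductivity follows sigma ~ sigma0 * (p - p_c)^t.
    """
    if filler_wt_pct <= threshold_wt_pct:
        return 0.0
    return sigma0 * ((filler_wt_pct - threshold_wt_pct) / 100.0) ** exponent

if __name__ == "__main__":
    for loading in (5, 8, 10, 12, 15, 20):
        print(f"{loading:>2} wt% filler -> {composite_conductivity(loading):.2e} S/m")
```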



New Laser System Upgrade Allows Scientists to Explore Fusion Energy and Plasma Physics Like Never Before 

Despite extensive research carried out over the past few years, plasma physics is still a subject of great mystery. However, thanks to a new laser system upgrade completed by scientists at the Department of Energy’s SLAC National Accelerator Laboratory, physicists can now triple the amount of energy in the Matter in Extreme Conditions (MEC) instrument. In doing so it allows them to create the extreme high-pressure conditions needed to really explore these areas.

The MEC instrument is made up of optical and x-ray lasers that work together to create and capture extreme temperature and pressure conditions in materials. Using the same technology, researchers have also examined how meteor impacts shock the minerals of the Earth’s crust, and have turned aluminum foil into plasma.


Having received funding from the Office of Fusion Energy Sciences (FES), the team in charge of the MEC instrument was able to double the amount of energy the optical beam delivers in 10 nanoseconds. “The team exceeded our expectations, an exciting accomplishment for the DOE High Energy Density program and future MEC instrument users,” remarks Kramer Akli, program manager for High Energy Density Laboratory Plasma at FES.

Some of the other laser system upgrades included a new front-end and the addition of an automated system to help shape the laser pulses with extreme precision. Doing so will enable users to have much greater control over the shape of the pulse they’d like to use in their experiments. A more intense and reliable laser allows researchers to study fusion energy in conditions that are more relevant. 

There are many different researchers across the world taking advantage of the MEC upgrade, including mechanical engineering doctoral student, Shaughnessy Brennan Brown. “The MEC upgrade at LCLS enables researchers like me to generate exciting, previously-unexplored regimes of exotic matter – such as those found on Mars, our next planetary stepping stone – with crucial reliability and repeatability,” she says. 


The role of the optical laser is to gradually amplify a low-power beam until it reaches an extremely high energy level. However, as the beam is amplified, its quality degrades and it becomes harder to control, making it unreliable for experiments. “The initial low energy pulse must have a pristine spatial mode and the properly configured temporal shape – that is, a precise sculpting of the pulse’s power as a function of time – before amplification to produce the laser pulse characteristics needed to enable each user’s experiment,” says the MEC Laser Area Manager, Michael Greenberg.

Before the upgrade, adjustments to achieve the right energy and pulse shape for each target had to be done manually, which was very time-consuming. Now, thanks to MEC laser scientist Eric Cunningham, the team has an automated system that can shape the low-power beam before it gets amplified. “The new system allows for precise tailoring of the pulse shape using a computerized feedback loop system that analyzes the pulses and automatically re-calibrates the laser,” says Cunningham. As well as improving pulse shapes, the system deposits energy on samples more consistently, so both efficiency and quality are vastly improved.
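For a rough sense of what such a feedback loop does, the Python sketch below nudges a simulated low-power seed pulse until a toy saturating amplifier reproduces a requested temporal shape. It is only an illustration of the closed-loop idea; the amplifier model, the proportional update rule, and all of the numbers are assumptions, not the MEC control software.

```python
import numpy as np

# Illustrative closed-loop pulse shaping: measure a simulated output pulse,
# compare it with the requested temporal shape, and nudge the low-power
# seed pulse until the two agree. The saturating "amplifier" and the
# proportional update rule are assumptions made for this sketch; they are
# not the MEC laser's actual control system.

def amplifier(seed, gain=50.0, saturation=40.0):
    """Toy amplifier: high gain with soft saturation that distorts the pulse."""
    return saturation * np.tanh(gain * seed / saturation)

def shape_pulse(target, steps=200, learning_rate=0.02):
    """Iteratively adjust the seed pulse so the amplified output matches target."""
    seed = np.full_like(target, 0.1)            # start from a flat low-power pulse
    for _ in range(steps):
        output = amplifier(seed)                # "measure" the amplified pulse
        error = target - output                 # deviation from the requested shape
        seed = np.clip(seed + learning_rate * error, 0.0, None)
    return seed, amplifier(seed)

if __name__ == "__main__":
    t = np.linspace(0.0, 10.0, 500)             # time axis, nanoseconds
    target = 30.0 * np.exp(-((t - 6.0) / 1.5) ** 2) + 5.0 * (t < 3.0)
    seed, achieved = shape_pulse(target)
    print("max deviation from target shape:", float(np.max(np.abs(target - achieved))))
```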


Computer Scientists Create New Algorithm That’s Exponentially Faster Than Any We’ve Seen Before

The use of algorithms in technology is nothing new. There are algorithms that help us avoid traffic and algorithms that can identify new drug molecules. And while these algorithms are already pretty powerful in their own right, what if they could perform even faster?

That’s what computer scientists at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) have been working on intently for the past few months. In their research, they’ve developed a whole new kind of algorithm that speeds up computation exponentially by drastically reducing the number of steps it takes to reach the desired solution.


A lot of optimization problems are still solved with decades-old algorithms. These algorithms work step by step, with the number of steps growing in proportion to the amount of data they’re dealing with. The problem is that this creates a bottleneck: entire areas of research raise questions that are simply too expensive to explore computationally.

“These optimization problems have a diminishing returns property,” says the senior author of the study and Assistant Professor of Computer Science at SEAS, Yaron Singer. “As an algorithm progresses, its relative gain from each step becomes smaller and smaller. This algorithm and general approach allow us to dramatically speed up computation for an enormously large class of problems across many different fields, including computer vision, information retrieval, network analysis, computational biology, auction design, and many others.”

What would previously have taken months to compute can now be done in a matter of seconds thanks to the new algorithm. “This new algorithmic work and the corresponding analysis opens the doors to new large-scale parallelization strategies that have much larger speedups than what has ever been possible before,” commented Jeff Bilmes, Professor in the Department of Electrical Engineering at the University of Washington.

Traditional algorithms for these optimization problems work by narrowing down the search space one step at a time until the best solution is found. The new algorithm takes a different approach: it samples a number of possibilities in parallel, then chooses the one that’s most likely to lead to the answer. “The strength of our algorithm is that in addition to adding data, it also selectively prunes data that will be ignored in future steps,” explains Eric Balkanski, a graduate student at SEAS and co-author of the paper.
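To make the sample-and-prune idea concrete, here is a minimal Python sketch applied to a toy coverage problem, which has the diminishing-returns property Singer describes. It is an illustration of the general approach only, not the authors’ actual algorithm, and the sampling sizes and pruning fraction are arbitrary choices.

```python
import random

# Minimal sketch of the "sample in parallel, then prune" idea applied to a
# toy coverage objective, which exhibits diminishing returns. This is an
# illustration of the general approach only, not the exact algorithm from
# the Harvard paper; the sampling and pruning parameters are arbitrary.

def coverage(selected, sets):
    """Objective with diminishing returns: number of distinct items covered."""
    covered = set()
    for i in selected:
        covered |= sets[i]
    return len(covered)

def sample_and_prune(sets, budget, samples_per_round=32, keep_fraction=0.5, seed=0):
    rng = random.Random(seed)
    candidates = list(range(len(sets)))
    chosen = []
    while len(chosen) < budget and candidates:
        # Sample several candidate blocks at once; in a real implementation
        # these evaluations would run in parallel.
        block_size = max(1, min(budget - len(chosen), len(candidates) // 4 or 1))
        blocks = [rng.sample(candidates, min(block_size, len(candidates)))
                  for _ in range(samples_per_round)]
        best_block = max(blocks, key=lambda b: coverage(chosen + b, sets))
        chosen.extend(best_block)
        # Prune: drop the candidates whose marginal gain is now lowest.
        base = coverage(chosen, sets)
        gains = {c: coverage(chosen + [c], sets) - base
                 for c in candidates if c not in chosen}
        ranked = sorted(gains, key=gains.get, reverse=True)
        candidates = ranked[: max(1, int(len(ranked) * keep_fraction))]
    return chosen[:budget]

if __name__ == "__main__":
    data_rng = random.Random(1)
    universe = range(200)
    sets = [set(data_rng.sample(universe, 12)) for _ in range(100)]
    picked = sample_and_prune(sets, budget=10)
    print("items covered by", len(picked), "sets:", coverage(picked, sets))
```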


As part of the research, Singer and Balkanski conducted experiments to show just how good their algorithm was. They showed it being able to sort through a pretty large data set containing 1 million ratings on various movies taken from more than 6,000 users and come up with a personalized recommendation list 20 times faster than anything else around. The algorithm was also tested on a taxi dispatch problem which looked to allocate the right taxis to the right customers, in the most efficient way. It proved to be 6 times faster than any other known solution.

Balkanski is confident that this figure would increase even more when applied to larger scale applications such as social media analytics or sponsored search auctions. The algorithm could potentially be used in other areas such as in the designing of better drugs to treat illnesses such as diabetes, Alzheimer’s, HIV, multiple sclerosis, and hepatitis C. Or, in designing advanced sensor arrays for medical imaging.

“This research is a real breakthrough for large-scale discrete optimization,” says Andreas Krause, professor of Computer Science at ETH Zurich. “One of the biggest challenges in machine learning is finding good, representative subsets of data from large collections of images or videos to train machine learning models. This research could identify those subsets quickly and have a substantial practical impact on these large-scale data summarization problems.”



Engineers Use Graphene to Create New Revolutionary Photodetector 

For quite some time now, graphene has been one of the most versatile materials known to man. It’s used in solar cells, drug delivery, and biosensors among other things. And now, thanks to a group of engineers at the UCLA Samueli School of Engineering it’s been used to develop a new kind of photodetector that could vastly improve thermal sensing, night vision, and medical imaging. 


Photodetectors are light sensors found in cameras and other imaging devices. They create images by sensing the patterns of incoming photons. Different photodetectors sense different parts of the light spectrum. For example, those used in night vision goggles work by sensing thermal radiation that’s undetectable to the naked eye. Others are used to identify chemicals in the environment by detecting how those chemicals reflect light.

Their operating speed, sensitivity levels to lower light, and how much of the spectrum they’re able to sense are the three things that largely decide how useful and versatile the photodetectors are. Traditionally when engineers made improvements to one of these areas at least one of the other areas would diminish as a result. But, this new photodetector developed by the UCLA engineers has significantly improved in all three areas. 


The photodetector operates across a broad range of light, processes images more quickly and is more sensitive to low levels of light than current technology.

“Our photodetector could extend the scope and potential uses of photodetectors in imaging and sensing systems,” says Mona Jarrahi, lead author of the study and a professor of electrical and computer engineering. “It could dramatically improve thermal imaging in night vision or in medical diagnosis applications where subtle differences in temperatures can give doctors a lot of information on their patients. It could also be used in environmental sensing technologies to more accurately identify the concentration of pollutants.” 

To build the photodetector, the engineers first laid strips of graphene over a silicon dioxide layer sitting on a base layer of silicon. They then created a series of comb-like nanoscale patterns made of gold. The graphene acted as a kind of net to catch incoming photons and transform them into an electrical signal, while the gold nanopatterns quickly transferred that information to a processor, which produces a high-quality image as a result.



Researchers Develop New Spectral Invisibility Cloak Like Nothing We’ve Seen Before  

The idea of a real cloaking device that can hide objects simply by manipulating how light interacts with them has been on researchers’ and engineers’ minds for quite some time. And now, thanks to a new study carried out by researchers at the National Institute of Scientific Research (INRS) in Montreal, Canada, that idea could become reality sooner than anyone imagined.

The study demonstrates a cloaking device based on manipulating the frequency of light waves as they travel through an object, a completely new approach compared to existing cloaking technologies. The researchers say it could be used to secure data transmitted over fiber optic lines and to improve technologies for telecommunications, sensing, and information processing.


Using this concept, the researchers say that theoretically, they could make 3D objects invisible from every direction. Most cloaking devices around at the moment can only fully hide an object when it’s illuminated with just a single color of light. The problem with that is that both sunlight and most other artificial light sources are broadband. This means that they contain several different colors within them. 

The spectral invisibility cloak is designed to hide objects even under broadband illumination. The way it works is by selectively transferring energy from some colors of the wave to others. Once the wave has passed through the object the device transforms the light back to its original state. 

“Our work represents a breakthrough in the quest for invisibility cloaking,” says Jose Azana of the INRS. “We have made a target object fully invisible to observation under realistic broadband illumination by propagating the illumination wave through the object with no detectable distortion, exactly as if the object and cloak were not present.”

When you look at an object, what you are really seeing is the way it manipulates the light waves interacting with it. Most invisibility cloaking solutions involve changing the path the light travels so that waves move around an object as opposed to through it. Other methods, known as temporal cloaking, work by changing the propagation speed of the light so that the object is temporarily concealed as it moves through the light beam.

“Conventional cloaking solutions rely on altering the propagation path of the illumination around the object to be concealed; this way, different colors take different amounts of time to traverse the cloak, resulting in easily detectable distortion that gives away the presence of the cloak,” explains Luis Romero Cortes of the INRS. “Our proposed solution avoids this problem by allowing the wave to propagate through the target object, rather than around it, while still avoiding any interaction between the wave and the object.”

A broadband wave illuminates an object, which reflects green light in the shown example, making the object detectable by an observer monitoring the wave. A spectral invisibility cloak transforms the blocked color (green) into other colors of the wave’s spectrum. The wave propagates unaltered through the object, without ‘seeing its color’ and the cloak subsequently reverses the previous transformation, making the object invisible to the observer.
Credit: Luis Romero Cortés and José Azaña, Institut National de la Recherche Scientifique


Azana and his team did this by developing a technique that rearranges the different colors of broadband light so that the wave can propagate through the object without revealing it. It works by first shifting the colors the object would interact with toward parts of the spectrum that are unaffected as the light propagates through it.

For example, if the object reflects blue light, then light in the blue part of the spectrum might be shifted to green, so there would be no blue light for the object to reflect. Once the wave has passed through the object, the cloaking device reverses the shift, returning the wave to its original state.
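The Python sketch below gives a numerical caricature of that shift-and-restore trick: spectral content is moved out of the band an object would absorb, the wave passes “through” the object, and the shift is then undone. Real spectral cloaking manipulates light with dispersive elements and phase modulators rather than digital Fourier transforms, and the bands and values used here are purely illustrative.

```python
import numpy as np

# Numerical caricature of the shift-and-restore idea described above: move
# the spectral content the object would absorb into unused spectral space,
# let the wave pass "through" the object, then shift it back. Real spectral
# cloaking acts on light with dispersive elements and phase modulators, not
# digital Fourier transforms; the bands chosen here are purely illustrative.

n = 4096
rng = np.random.default_rng(0)

absorb = slice(600, 700)   # frequency band the object absorbs (its "color")
spare = slice(800, 900)    # unused band the cloak borrows temporarily

# Build a broadband wave that happens to carry no energy in the spare band.
spectrum = rng.normal(size=n // 2 + 1) + 1j * rng.normal(size=n // 2 + 1)
spectrum[spare] = 0.0
wave = np.fft.irfft(spectrum, n)

def through_object(spec):
    """The object simply absorbs everything in its band."""
    out = spec.copy()
    out[absorb] = 0.0
    return out

# Without the cloak, the object leaves a detectable hole in the spectrum.
naked = np.fft.irfft(through_object(spectrum), n)

# With the cloak: shift the vulnerable band away, pass through, shift back.
cloaked = spectrum.copy()
cloaked[spare] = cloaked[absorb]     # stage 1: move the threatened colors
cloaked[absorb] = 0.0
cloaked = through_object(cloaked)    # the object now has nothing to absorb
cloaked[absorb] = cloaked[spare]     # stage 2: undo the shift
cloaked[spare] = 0.0
restored = np.fft.irfft(cloaked, n)

print("distortion without cloak:", float(np.max(np.abs(naked - wave))))
print("distortion with cloak:   ", float(np.max(np.abs(restored - wave))))
```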

The cloaking device was built from two pairs of readily available electro-optical components: a dispersive optical fiber and a temporal phase modulator. One pair was placed in front of the object to be concealed, in this case an absorbing optical filter, and the other pair behind it. The experiment successfully showed that the device could transform light waves that would normally have been absorbed by the filter and then reverse the process as the light emerged from the other side, making it look as though the laser light had propagated through a non-absorbing medium.

While there is still some work to be done before the invisibility cloak is ready for commercial or military use, it is certainly something that could someday be used in a wide range of security measures. As well as cloaking, the overall concept of reversing and redistributing energy could be used in numerous applications. The removal and reinstatement of colors in broadband waves could allow more data to be transmitted over any one link, helping to remove bottleneck and logjam issues as demand increases.

The researchers’ next move is to try to extend the concept so that an object appears invisible when illuminated from every direction. As they work toward that goal, they will also look to advance practical applications of single-direction spectral cloaking in 1D wave systems, such as those based on fiber optics.



Scientists Discover Nearly 80 Exoplanet Candidates in NASA’s K2 Mission

K2 is the name given to the follow-up mission to NASA’s Kepler Space Telescope. Upon analyzing data retrieved from the mission, scientists discovered nearly 80 new exoplanet candidates. One that stood out in particular is a potential planet orbiting a star known as HD 73344. If confirmed, HD 73344 would be the brightest planet host yet detected by the K2 mission.

The planet is estimated to be around 2.5 times the size of Earth and takes around 15 days to orbit HD 73344. It’s also a very hot planet, with average temperatures of 1,200 to 1,300 degrees Celsius (roughly 2,200 to 2,400 degrees Fahrenheit). It sits around 114 light years from our own planet, close enough for scientists to consider it a perfect candidate for follow-up studies to determine its atmospheric composition, among other characteristics.


Ian Crossfield, an assistant professor of physics at MIT and co-leader of the study, believes the planet is probably a smaller, hotter version of Neptune or Uranus. Beyond the significant number of exoplanet candidates discovered, the analysis has also been praised for how quickly it was completed. Using existing tools developed at MIT, the researchers rapidly identified the new candidates, and the information was made public just weeks after the observations were completed. Usually, that kind of analysis takes months, if not years.

Such a fast planet search allows astronomers to follow up their findings with ground-based telescopes much sooner than normal, giving them a better chance of observing the planetary candidates before the stars set from view for the year. “When the TESS data come down, there’ll be a few months before all of the stars that TESS looked at for that month ‘set’ for the year,” says Crossfield. “If we get candidates out quickly to the community, everyone can start immediately observing systems discovered by TESS, and do a lot of great planetary science.”


In each of its campaigns, K2 observes one patch of the sky for a total of 80 days. The campaigns analyzed by the team were C16 and C17, K2’s 16th and 17th observation campaigns. Both campaigns were forward-facing, meaning K2 observed stars that sat ahead of the telescope and were still visible from Earth. Crossfield, study co-author Liang Yu, and others seized this opportunity to accelerate the usual analysis of K2 data.

During the C16 campaign, 20,647 stars were observed by K2 between Dec 7, 2017, and Feb 25, 2018. Just three days later, on Feb 28, the data was released to all within the astronomy community. Using algorithms, Yu and Crossfield whittled down the 20,000+ stars to just 1,000 that were of real interest. Then, after further analysis, they managed to pick out just 30 planet candidates that were of the highest quality and whose periodic signatures were most likely caused by transiting planets.
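The kind of periodic-signature search described above can be illustrated with a toy example: inject a shallow transit-like dip into a synthetic 80-day light curve, then scan trial periods and score how strongly the phase-folded data dip. The Python sketch below does exactly that; it is a simplified illustration, not the pipeline Yu and Crossfield used, and all of its numbers are made up.

```python
import numpy as np

# Toy version of the periodic-signature search described above: inject a
# transit-like dip into a synthetic 80-day light curve, scan trial periods,
# and score how strongly the phase-folded curve dips. An illustration of
# the idea only, not the MIT/K2 pipeline; every number here is made up.

rng = np.random.default_rng(42)
days = 80.0                                     # length of one K2 campaign
time = np.arange(0.0, days, 0.0204)             # ~30-minute observing cadence
flux = 1.0 + rng.normal(0.0, 3e-4, time.size)   # normalized flux with noise

true_period, depth, duration = 15.0, 1e-3, 0.2  # days, relative depth, days
in_transit = (time % true_period) < duration
flux[in_transit] -= depth                       # inject the transit dips

def dip_score(period, nbins=200):
    """Depth of the deepest phase bin relative to the median binned flux."""
    phase = (time % period) / period
    bins = np.minimum((phase * nbins).astype(int), nbins - 1)
    counts = np.bincount(bins, minlength=nbins)
    sums = np.bincount(bins, weights=flux, minlength=nbins)
    binned = sums[counts > 0] / counts[counts > 0]
    return np.median(binned) - binned.min()

trial_periods = np.arange(5.0, 25.0, 0.02)
scores = np.array([dip_score(p) for p in trial_periods])
best = trial_periods[np.argmax(scores)]
print(f"strongest periodic dip at {best:.2f} days (injected period: {true_period} days)")
```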

A similar number of planetary candidates were also identified in the C17 analysis. As well as these, the researchers also identified hundreds of potential signatures of various astrophysical phenomena such as pulsating stars or supernovae. And while stars don’t typically change too much over the course of just one year, Crossfield says when it comes to doing a follow-up visit the sooner the better. “You want to observe [candidates] again relatively soon so you don’t lose the transit altogether,” he says. “You might be able to say, ‘I know there’s a planet around that star, but I’m no longer at all certain when the transits will happen.’ That’s another motivation for following these things up more quickly.”


Since the team’s results were published, four candidates have been confirmed as exoplanets. The researchers have also been studying other potential candidates, including the possible planet orbiting HD 73344. Crossfield says the speed with which this planetary candidate was detected, along with the brightness of its star, will make it much easier for astronomers to home in on even more specific characteristics of the system.

“We found one of the most exciting planets that K2 has found in its entire mission, and we did it more rapidly than any effort has done before,” says Crossfield. “This is showing the path forward for how the TESS mission is going to do the same thing in spades, all over the entire sky, for the next several years.” 


A Breakthrough in Gene Editing Sees Researchers Cure Blood Disorder

A new technique developed by a group of Yale researchers has, for the first time ever, used gene editing to correct a mutation in a fetus that causes a form of anemia. The work was described in a recent paper published in the journal Nature Communications.

The technique involves injecting nanoparticles containing both donor DNA and peptide nucleic acids (PNAs), synthetic molecules that mimic DNA. The PNAs bind to their target gene and form a triple helix, which triggers repair of the mutation. In this study, the nanoparticles were injected into mouse fetuses. Four months after birth, the mice were cured of thalassemia, an inherited blood disorder.


Image of nanoparticles accumulating in the liver of a fetal mouse.

Peter M. Glazer, MD, professor of therapeutic radiology and of genetics, is the man responsible for developing the technique which sees PNA and DNA combine in order to repair gene mutations. Here’s what he had to say on the breakthrough: “The treated mice had normal blood counts, their spleens returned to normal size, and they lived a normal lifespan – whereas, the untreated ones died much earlier. So, we have a long-term survival benefit, which is pretty dramatic.”

The researchers say that using nanoparticles to deliver the PNAs is essential. Injected intravenously on their own, PNAs clear from circulation in just 30 minutes or so; delivered inside nanoparticles, they remain present for far longer. The nanoparticles were made by Mark Saltzman, Yale’s Goizueta Foundation Professor of Biomedical Engineering, Chemical and Environmental Engineering, and professor of physiology. He designed them from a degradable polymer and made them small enough to accumulate easily in the liver of the fetus.

“People who have thalassemia, they get sicker and sicker as they go on because they don’t have normal red blood cell function, and it gets harder to treat,” says Saltzman. “Here, we’re correcting the gene very early in development, so you see more benefits because they don’t get sick.” While most other gene-editing procedures are limited to working with cells in a petri dish, this technique gets its results in live animals, making it more precise.


Distribution of nanoparticles in a litter of fetal mice after intravenous nanoparticle treatment. The intense green, yellow, and red areas show higher concentrations. The highest accumulation of nanoparticles in each mouse is in the fetal liver.

One of the main challenges of the study was making sure the therapy worked after just one injection, as multiple injections increase the risk of harm to the fetus. This is where David H. Stitelman, MD, an assistant professor of pediatric surgery with expertise in accessing fetal stem cells, proved invaluable. “You have to catch these cells while they are in a state of massive proliferation, so this is literally a once-in-a-lifetime opportunity,” he says.

The team is now in the process of seeing how their technique can be used to treat other single-gene genetic disorders, including sickle cell disease and cystic fibrosis. “If a baby could be born with a lower burden of disease – or no disease whatsoever – that would have a profound impact on that child’s life as well as the family,” says the study’s first author, Adele Ricciardi, a current MD/Ph.D. student. The number of people this could help is enormous. Now that scientists know it’s possible, they just need to make it happen.



New Frequency Hopping Transmitter Defies Even the Fastest Hackers

There are already more than 8 billion devices dotted around the world, all connected to what we call the “internet of things”. This includes vehicles, medical devices, wearables, and all other kinds of smart technologies. It’s estimated that by 2020, more than 20 billion devices will be connected together, all sharing and uploading data online. 

The problem is that those devices are all very vulnerable to hackers, who can jam signals, overwrite data, and generally be a nuisance. One way to protect data is through a method known as “frequency hopping,” which sends each data packet on a different radio frequency (RF) channel so that no individual packet can be pinned down. But hopping large packets still takes too long, and fast attackers can still get in.


To combat this issue, MIT researchers have developed a new kind of hopping transmitter that’s quick enough to stop even the fastest of hackers. It works by leveraging frequency-agile devices known as bulk acoustic wave (BAW) resonators to flip rapidly between RF channels, sending a bit of information with each hop. The researchers also built in a channel generator that randomly selects a different channel for each bit every microsecond, and they developed a wireless protocol to support the fast frequency hopping.

“With the current existing [transmitter] architecture, you wouldn’t be able to hop data bits at that speed with low power,” explains Rabia Tugce Yazicigil, a postdoc in the Department of Electrical Engineering and Computer Science and first author on the paper. “By developing this protocol and radio frequency architecture together, we offer physical-layer security for connectivity of everything. More seriously, perhaps, the transmitter could help secure medical devices, such as insulin pumps and pacemakers, that could be attacked if a hacker wants to harm someone.”

MIT researchers developed a transmitter that frequency hops data bits ultrafast to prevent signal jamming on wireless devices. The transmitter’s design (pictured) features bulk acoustic wave resonators (side boxes) that rapidly switch between radio frequency channels, sending data bits with each hop. A channel generator (top box) each microsecond selects the random channels to send bits. Two transmitters work in alternating paths (center boxes), so one receives channel selection, while the other sends data, to ensure ultrafast speeds. Courtesy of the researchers

One way in which hackers attack wireless devices is through a process called selective jamming. This is where a hacker manages to intercept and corrupt data packets being transmitted from a single device yet leave all others untouched. These attacks are often hard to detect and are sometimes mistaken for poor wireless links. For that reason, they’re hard to catch with existing frequency hopping transmitters. 


With frequency hopping, data is sent across various channels based on a sequence agreed in advance with the receiver. Existing packet-level frequency hopping sends one data packet at a time, each on a single 1-megahertz channel chosen from a range of 80 channels. Sending one packet takes around 612 microseconds, and the problem is that hackers can locate the channel within the first microsecond and then jam the rest of the packet.

“Because the packet stays in the channel for a long time, and the attacker only needs a microsecond to identify the frequency, the attacker has enough time to overwrite the data in the remainder of the packet,” says Yazicigil. 

To build their ultrafast frequency-hopping method, the researchers replaced the crystal oscillator with one based on a BAW resonator. The catch is that BAW resonators only cover around 4 to 5 megahertz of frequency channels, well short of the 80 megahertz available for wireless communication in the 2.4-gigahertz band.

To get around that issue, they incorporated components that split the input frequency into several different frequencies. Then a mixer combines these divided frequencies with the BAW frequencies to get a whole new host of radio frequencies capable of spanning around 80 channels. The next step involved randomizing the way in which data was sent. To do this the researchers used a system where each microsecond generated a pair of separate channels.

Using a secret key pre-shared with the transmitter, the receiver knows which of the two channels corresponds to a 1 bit and which to a 0 bit. The channel actually carrying the bit is always the one displaying more energy, so the receiver compares the two channels’ energies, notes which is higher, and decodes the bit accordingly. The channel selection is both fast and random, and because there’s no fixed frequency offset, an attacker has no way to tell which bit is going down which channel, making selective jamming no better than a random guess.
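A toy simulation helps make the scheme concrete: for each bit a shared key selects a fresh pair of channels, the transmitter puts energy only on the channel matching the bit, and the receiver decides by comparing the two channels’ energies. The Python sketch below mimics that behavior in software; it is an assumption-laden illustration of the idea, not the MIT team’s hardware or protocol.

```python
import random

# Toy simulation of the per-bit scheme described above: for every bit a
# shared key picks a fresh pair of channels (one meaning "0", one meaning
# "1"), the transmitter puts energy only on the channel matching the bit,
# and the receiver decides by comparing the two channels' energies. This
# sketches the idea only; it is not the MIT transmitter's actual protocol.

NUM_CHANNELS = 80   # channels available in the 2.4 GHz band
NOISE = 0.2         # assumed background noise level (arbitrary units)
SIGNAL = 1.0        # assumed transmitted energy (arbitrary units)

def channel_pair(key, bit_index):
    """Both ends derive the same pseudo-random channel pair from the shared key."""
    rng = random.Random(key * 1_000_003 + bit_index)
    return rng.sample(range(NUM_CHANNELS), 2)   # [channel for a 0, channel for a 1]

def transmit(bits, key):
    """Return the per-bit channel energies a receiver or eavesdropper would see."""
    measurements = []
    for i, bit in enumerate(bits):
        energies = [abs(random.gauss(0.0, NOISE)) for _ in range(NUM_CHANNELS)]
        pair = channel_pair(key, i)
        energies[pair[bit]] += SIGNAL           # energy only on the "true" channel
        measurements.append(energies)
    return measurements

def receive(measurements, key):
    bits = []
    for i, energies in enumerate(measurements):
        ch_zero, ch_one = channel_pair(key, i)
        bits.append(1 if energies[ch_one] > energies[ch_zero] else 0)
    return bits

if __name__ == "__main__":
    key = 0xC0FFEE
    sent = [random.randint(0, 1) for _ in range(1000)]
    received = receive(transmit(sent, key), key)
    errors = sum(s != r for s, r in zip(sent, received))
    print(f"bit errors without jamming: {errors} / {len(sent)}")
```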


The team’s final innovation was integrating two transmitter paths into a more efficient architecture, which lets one path load the next selected channel while the other is still sending data on the current channel; the two paths then alternate. This maintains a 1-microsecond frequency-hop rate while preserving a data rate of roughly 1 megabit per second, similar to BLE-type transmitters.


Dark Matter of the Human Genome Offers Insight as to How the Androgen Receptor Impacts Prostate Cancer 

Researchers at the University of Michigan Rogel Cancer Center have identified a new gene that helps control signaling from a key player in prostate cancer, the androgen receptor. The study, published in Nature Genetics, revealed that knocking down this gene, named ARLNC1, depleted cancer cells in mice, suggesting the long noncoding RNA (lncRNA) may be a target for future therapies.


Current prostate cancer treatments work by blocking the androgen receptor to stop cancer growth. The problem is that many patients develop resistance to this kind of therapy and go on to develop metastatic castration-resistant prostate cancer as a result. “The androgen receptor is an important target in prostate cancer. Understanding that target is important,” says the study’s senior author, Arul Chinnaiyan, M.D., Ph.D., director of the Michigan Center for Translational Pathology.

In 2015, Chinnaiyan’s lab identified a large number of lncRNAs, which are often referred to as the dark matter of the genome because so little is known about them. In searching for lncRNAs that play a role in prostate cancer, the researchers found that ARLNC1 levels are higher in prostate cancer than in benign prostate tissue, suggesting it plays a role in the development of the disease. The fact that it was linked to androgen receptor signaling made it all the more exciting.


The study revealed that the androgen receptor induces the expression of ARLNC1, which then binds to the receptor’s messenger RNA transcript. This stabilizes androgen receptor levels, which in turn feed back to sustain ARLNC1, creating a positive feedback loop. “At the end of the day, you’re creating or stabilizing more androgen receptor signaling in general and driving this oncogenic pathway forward. We’re envisioning a potential therapy against ARLNC1 in combination with therapy to block the androgen receptor – which would hit the target and also the positive feedback loop,” says Chinnaiyan.

When ARLNC1 was blocked in cell lines expressing the androgen receptor, the cancer cells died and tumor growth was prevented. Raising ARLNC1 levels in mice caused large tumors to form, while depleting ARLNC1 caused the tumors to shrink. Moving forward, the researchers plan to study the biology of ARLNC1 to see how it’s linked to androgen receptor signaling and prostate cancer progression.

