
Four Quick Ideas to Guide the Design of Your App

Marketing and customer engagement have changed dramatically from traditional television, radio, and newspaper print, and even from the traditional website. Now there are social media platforms and native apps on offer as another way of connecting with and keeping your customers.

An app is an excellent addition to your digital and online presence and an asset to your business. You need to know a thing or two about apps first, and even if you aren’t the expert, it is good to educate yourself a little so that when you engage with the agency or IT team doing the work, you will know what they are talking about and can offer opinions and suggestions. Here are four quick ideas to guide your app design process.


Finding a development team

Use Google to search for a phrase like ‘mobile app development Sydney‘ to find app development agencies or companies that can provide the skills and services you need. You will then want to ensure that they understand your business and have good insight into what you want and need to achieve through the app. There are many different forms of mobile device now.

These include the original mobile phone, but also tablets, smartwatches, and other devices that allow for mobility. People use mobile devices in several ways, and you need to understand how your customers use theirs. Usage ranges from a purely social device where personal matters are handled, all the way to the other extreme, where the device is an extension of the office.

Consider which devices will be used

People use different devices and different operating systems, and you need to know which is which. There are differences in how they work and in how your app needs to be engineered to be compatible.

If you want to target only one specific operating system, such as Android, then find a company that specialises in it; for Android app development, Sydney has plenty of potential partners who will be able to support your development needs. There are also different types and tiers of phone within each operating system.


Knowing whether to go native or not

There is a movement away from native apps towards web-based apps. The argument runs that a native application, in other words, one that you have to download onto your phone through an app store, costs data to download. It then sits on your device, taking up space and clogging up your storage and operating capacity.

It asks for software updates, and if these aren’t done, the app will develop glitches and errors. A web-based app, by contrast, is updated immediately without downloads, doesn’t need to be installed on your phone and therefore won’t take up unnecessary space, and you don’t need to pay for the data to download it in the first place.

Photo by Rami Al-zayat

How To Tell Whether Your Laptop Is New, Refurbished or A Replacement

If you are buying a new laptop directly from the manufacturer or a reputable electronics store, you can be safe in the knowledge that the product is brand new, sealed, and never used before. If, however, you buy a ‘new’ laptop from a vendor without such a reputation, you may well find that the laptop is not factory new, but rather a refurbished model or an open-box replacement. There is nothing wrong with either of these options, of course, but it is still important to know what kind of laptop you have purchased, as each comes with its own pros and cons.


Refurbed and Replacements 

Refurbished laptops are laptops which have been purchased brand new and then returned, whether as faulty, because credit was not paid, or something similar. These laptops are sent back to the manufacturer, where they are given a full service and diagnostic check, repaired using original parts, and then rigorously tested so that they pass the same quality control as a new item.

Replacement or open-box laptops are those which have been bought, opened, and then returned by the customer. Refurbished laptops have an unknown lifespan, but they are significantly cheaper than a new model. Replacement laptops should work just as well as new, but they should be sold at a lower cost given that they have already been opened and handled.

Telling the Difference

A refurbished laptop should have a sticker on the box which clearly marks it as refurbished. Another telltale sign, if there is no sticker, is an additional ‘R’ at the end of the product number, indicating that the unit has been refurbished. In 95% of cases, the laptop you find on sale will have one of these identifiers.

Price

The price will also be a key indicator. If you find a ‘new’ Dell laptop in a store on sale for $399 when the brand-new price is still $599, you should approach with caution, especially if it is not marked as either an open-box unit or a refurb. The price is a great way of deciding whether or not the product is, in fact, the real deal.


Usability and Telltale Signs

Ultimately, if a laptop is not marked as either refurbished or a replacement, it will be hard to tell whether you have a brand-new product. You can keep an eye out for signs: if there are dead pixels on the screen or the boot is slow, alarm bells should start ringing. Sometimes you may even spot suspicious stickers; if the box says something like ‘designed for Windows XP’ but the machine is, in fact, running Windows 10, it is likely that the product has been refurbed.

Keep your eyes out for any of these signs to identify what kind of laptop you have bought. 

Photo by Daniel Korpai

Is the Concept of a Quantum Internet Just Around the Corner?

With technology advancing at rapid speeds and researchers working day and night to solve some of the world’s greatest physics mysteries, is the whole idea of a quantum internet closer than we think?

Adapting to a world of quantum computing may sound far-fetched, but it’s really not that far away, and the benefits it will bring once introduced, integrated, and accepted will be enormous. Communications will be more secure, sensors will be more precise, and a higher level of computation than we’ve ever experienced will be available as standard.


The idea that quantum computers will act as nodes in a whole network of quantum devices has also been envisioned over the past few years. In this kind of system, connections would be made via quantum channels through which quantum data would flow, effectively creating the foundations for the first-ever quantum internet.

But that is definitely easier said than done. One of the biggest challenges is how to sort the network’s quantum data according to the state in which it was prepared. To help overcome this problem, a group of researchers from the Universitat Autònoma de Barcelona (UAB) developed a procedure capable of identifying clusters of quantum systems that have been identically prepared.

Scheme of the quantum data classification protocol. CREDIT: UAB

The protocol identifies natural groupings according to the common attributes the systems share, and can be compared to the way a classical computer distinguishes the different sounds it captures. A computer can quite capably differentiate a conversation, a street musician, or traffic in the street, and a quantum computer works in a similar fashion, but at a much deeper level.


The UAB physicists compared the performance of classical and quantum protocols in this area. Results from the study showed that the quantum protocol was far more efficient than the classical strategies, especially when it comes to classifying high-dimensional data.

So, it’s another leap forward in the development of a future quantum internet. For more information, see the research published in Physical Review X.

Photo by NASA

How Are Memories Really Made?

It’s easy to take the human body for granted. But it’s an amazing vessel, and the human mind is an incredible part of it. There are so many different things going on in the brain at any one time. So how on Earth does it manage to turn new information perceived from the outside world into something we call memory?

The Human Brain Project is a collaboration of international researchers all working towards this same goal, and it has just published a study on the subject in PLOS Computational Biology. The study centers on a part of the brain involved in behavior, memory, and reward learning: the neuronal circuits in the striatum. Results from the study give scientists a better understanding of the nervous system and the way in which it learns and adapts to change.


Within the brain are neuronal circuits that interconnect with one another via synapses. Every time one of these synapses is modified, it affects the way we react to certain stimuli or remember things. Synaptic plasticity, in which certain synapses become stronger or weaker over time in response to the neural activity they experience, is one way these neuronal circuits are modified. By taking a closer look at the underlying biochemical reactions involved in synaptic modifications, scientists have discovered a great deal more about the mechanisms of plasticity.

Learning about plasticity mechanisms is essential to understanding how things such as learning and the formation of memories occur within the brain. Synaptic plasticity is determined by information processed through synaptic signal transduction networks. And, on occasion, the computational capabilities of these networks can be realized by single molecules.

“Our work provides a significant step towards understanding what we can call ‘molecular recognition’ of these AC proteins, based on which neurons can control with astonishing precision and fidelity the speed of AC’s catalyzed reaction. This, in turn, activates subsequent downstream processes essential for neuronal function,” the researchers explain.


There are a total of nine membrane-bound adenylyl cyclase (AC) variants expressed within the brain, with AC5 the most dominant in the striatum. During reward learning, cAMP production is necessary for strengthening the synapses that interconnect cortical and striatal neurons. For this study, the team used a multiscale simulation approach to construct a kinetic model of the signaling system. “From this model, we could find out how AC5 can detect particular combinations of simultaneous changes in neuromodulatory signals which result in synergistic cAMP production,” said Rebecca Wade, leader of the study at the Heidelberg Institute for Theoretical Studies (HITS).
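
To make the idea of a kinetic model concrete, here is a toy mass-action sketch of stimulus-driven cAMP production by an AC-like enzyme, integrated with SciPy. The reaction scheme, the `pulse` stimulus, and every rate constant are invented for illustration only and bear no relation to the study’s fitted multiscale model.

```python
# A toy kinetic model of cAMP production, in the spirit of the signaling
# model described above. All rate constants here are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp

K_ACT = 0.5    # assumed activation rate of AC by neuromodulatory input
K_DEACT = 0.1  # assumed deactivation rate of active AC
K_CAT = 2.0    # assumed catalytic rate of cAMP synthesis by active AC
K_DEG = 0.3    # assumed degradation rate of cAMP (e.g. by phosphodiesterases)

def rhs(t, y, stimulus):
    """Mass-action rates for active AC fraction and cAMP concentration."""
    ac_active, camp = y
    d_ac = K_ACT * stimulus(t) * (1 - ac_active) - K_DEACT * ac_active
    d_camp = K_CAT * ac_active - K_DEG * camp
    return [d_ac, d_camp]

# Simulate a transient neuromodulatory pulse between t = 1 s and t = 3 s
pulse = lambda t: 1.0 if 1.0 < t < 3.0 else 0.0
sol = solve_ivp(rhs, (0, 10), [0.0, 0.0], args=(pulse,), max_step=0.01)
print(f"peak cAMP: {sol.y[1].max():.3f} (arbitrary units)")
```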

Illustration: Tatiana Shepeleva / Shutterstock

Controlling the Transport and Spin Properties of Cold Atoms

Charge-neutral atoms (or cold atoms, as you may know them) are kept at extremely low temperatures, where their quantum properties become more apparent. One particular use for these atoms is emulating the basic behavior of electrons.

One team that has taken a particular interest in this area over the past few years is led by Tilman Esslinger at the Institute of Quantum Electronics in the Department of Physics at ETH Zurich. As part of their research, they developed a platform that feeds these cold atoms through 1D and 2D structures. In doing so, they are able to study the quantized conductance of the atoms in more detail.


An optical beam (red) introduces an effect equivalent to applying a magnetic field inside an optically defined structure in which the atoms move (green). Atoms in the energetically lower spin state (orange) can flow while atoms in a higher spin state (blue) are blocked. CREDIT ETH Zurich/D-PHYS

This atomic spin filter developed by the ETH researchers is just as efficient as equivalent systems currently available. And when paired with the controllability of the cold-atom platform, it opens up a whole realm of exciting new possibilities for exploring quantum transport in more depth.

Photo by Zoltan Tasi on Unsplash

Did the Universe’s Very First Stars Form Quicker Than We First Thought?

New insight gained from an ancient gas cloud has revealed just how fast the Universe’s first stars formed. The 13-billion-year-old cloud has enabled astronomers from the Carnegie Institution for Science to make the earliest measurement to date of star formation and element production in the universe. Results from the study reveal that the first stars in the universe formed much earlier than we previously believed.

When the Big Bang took place, the universe began as a hot mix of rapidly expanding energetic particles. As these particles cooled, they transformed into hydrogen gas. There were no light sources at this point. Eventually, gravity condensed the matter, and the first stars and galaxies were formed.


Stars play an important part in the galaxy, as they are responsible for synthesizing most of the elements we know. When the very first stars exploded, all the elements contained within them were scattered around the universe. And from this, further generations of stars were created.

An ancient gas cloud discovered by a team including recent Carnegie-Princeton fellow Eduardo Bañados and Carnegie’s Michael Rauch and Tom Cooper formed just 850 million years after the Big Bang. Its chemical composition reveals that the first generation of stars formed quickly and rapidly enriched the universe with the elements they synthesized.
Illustration courtesy of the Max Planck Society

“Looking back in time far enough, one may expect cosmic gas clouds to show the tell-tale signature of the particular element ratios made by the first stars,” said Carnegie’s Michael Rauch, one of the authors on the study. “Peering even further back, we may ultimately witness the disappearance of most elements and the emergence of pristine gas.”

The discovery of the ancient gas cloud came when the researchers were using the Magellan telescopes located at Carnegie’s Las Campanas Observatory in Chile to follow up on some previously discovered distant quasars.  


A quasar is an extremely luminous object powered by a supermassive black hole accreting matter at the heart of a large galaxy. To reach us, the quasar’s light has to shine through intervening gas clouds, allowing astronomers to probe their inner chemistry from early in the universe’s history. So far, results from the study have shown that the cloud’s composition was, in fact, surprisingly modern for such an early object.

The team is hopeful that further gas clouds will be discovered in the future allowing us to learn even more about the primitive stars of the universe. 

Photo by NASA on Unsplash

Are Scientists Really Any Closer to Explaining the Origin of Black Hole Mergers?

While scientists may have detected and reported the emission of gravitational waves from a total of 10 black hole mergers to date, they are still trying to explain the origin of these events. The largest merger recorded so far has a bigger mass and higher spin than scientists ever imagined possible.

To better understand how this merger could have occurred, a group of researchers created simulations and published their findings in Physical Review Letters. Results from the study suggest that these mergers take place in close proximity to supermassive black holes, in a region called the accretion disk, where gas, dust, stars, and black holes lie.


This latest research suggests that as these black holes move around the accretion disk, they collide with one another, creating a black hole merger. As the merged black hole continues to circle the accretion disk, it devours smaller black holes, adding to its size in what one of the researchers on the study, Richard O’Shaughnessy, Assistant Professor at the Rochester Institute of Technology, describes as “Pac-Man-like” behavior.

This is a simulation of an accretion disk surrounding a supermassive black hole. CREDIT Scott C. Noble

“This is a very tantalizing prospect for those of us who work in this field,” said O’Shaughnessy. “It offers a natural way to explain high-mass, high-spin binary black hole mergers and to produce binaries in parts of parameter space that the other models cannot populate. There is no way to get certain types of black holes out of these other formation channels.”

As researchers continue to search for gravitational waves, O’Shaughnessy and his colleagues are hoping to find further clues that will confirm their models. If their theory proves correct, it may help us better understand the assembly of galaxies as a whole.


Collateral Sensitivity – Is It the Answer to Developing New and Sustainable Antibiotic Treatments?

It’s no secret that antibiotic resistance is increasing around the world and for quite some time now researchers have been scrambling to find something that will overcome this dilemma. While various avenues have been explored, none have yet succeeded in achieving the required results. But, that could all be about to change.

New insight has been gathered from a study involving a type of multi-resistant bacterium called Pseudomonas aeruginosa, which is known to cause severe infections in humans. By observing this bacterium more closely, researchers have discovered that the evolutionary dynamics behind its antibiotic resistance could be exploited to develop new and sustainable antibiotic therapies.


The first author of the study is Camilo Barbosa, a former postdoctoral student at the Kiel Evolution Center (KEC) of Kiel University in Germany. “Antibiotic resistance is one of the most serious threats to public health worldwide,” he says. “The World Health Organization warns of a post-antibiotic era in which infections can no longer be treated and could become one of the most frequent non-natural causes of death”. 

With antibiotic resistance evolving so fast, the effectiveness of antibiotic treatments also declines fast and, within a short space of time, they become ineffective altogether. What this means is that new strategies need to be developed rapidly to counteract these detrimental effects. And in order to succeed, they need to take all the “relevant evolutionary processes into account,” explains Barbosa.


Evolved P. aeruginosa streaked out on an agar plate, highlighting variation in evolved resistance. CREDIT: Barbosa et al.

Collateral sensitivity occurs when bacteria, in evolving resistance to one drug, simultaneously develop increased sensitivity to a different drug. “While a variety of distinct cases of collateral sensitivities have previously been described, it was still unclear whether they could be exploited for antibiotic treatment,” says Barbosa. “We tested one key requirement of this principle for medical implementation: stability of the evolutionary trade-off.”

From their experiments, the researchers found that P. aeruginosa evolves quite distinct collateral sensitivities in response to different drugs. Some of these remained stable over time, leading either to increased extinction of the bacteria or to a complete absence of multidrug resistance evolution. They also found that the effectiveness of the drugs depended on the order in which they were used, the underlying genetic mechanisms, and the evolutionary trade-offs the bacteria undergo while evolving antibiotic resistance.

In this particular instance, the bacteria were unable to adapt when faced with the administration of antibiotics and soon became extinct. “The effects of changing certain drug classes and the impact of evolutionary costs on the development of resistance demonstrate the enormous potential of evolutionary principles for the design of new, sustainable antibiotic therapies,” comments senior author Hinrich Schulenburg, Professor in Zoology at the KEC. The next step for the professor and his colleagues is to further develop these strategies so that, in the future, they could potentially be used in patient treatment.

Photo by Drew Hays on Unsplash

Researchers Develop ‘Tremor Trackers’ to Help Parkinson’s Sufferers

Parkinson’s Disease is a progressive condition that affects the nervous system, with one of its most notable symptoms being tremors. The current way in which neurologists measure the severity of these tremors is the Unified Parkinson’s Disease Rating Scale (UPDRS), which evaluates a patient’s condition as they perform a number of given tasks.

The problem with this kind of ‘quick’ evaluation is that it only gives the neurologist a small glimpse of what everyday life is really like for the patient. To really be able to manage and treat a Parkinson’s Disease patient’s tremors there needs to be a way to monitor them continuously as they go about their daily routine. 


Which is where the team of researchers from Florida Atlantic University’s (FAU) College of Engineering and Computer Science, a team from Rochester Medical Center, and another from the Icahn School of Medicine at Mount Sinai come in. Together they have developed complex algorithms that, when paired with wearable sensors, can continuously monitor and estimate the total number of tremors a Parkinson’s patient experiences in any given period. And unlike the UPDRS scenario, these readings can be obtained in real life while the patient is in their natural environment.

The study involved the use of two machine-learning algorithms (LSTM-based deep learning and gradient tree boosting) to estimate tremor severity in the patient. Two sensors were placed on the patient to collect data: one on the most affected wrist and the other on the most affected ankle.
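
As a rough illustration of the gradient tree boosting approach, here is a minimal sketch that extracts simple features from windowed accelerometer data and fits a boosted-tree regressor. The sampling rate, window length, feature set, and the use of scikit-learn’s `GradientBoostingRegressor` are all assumptions; the study’s actual features, labels, and model configuration are not described here.

```python
# A minimal sketch of tremor estimation with gradient tree boosting.
# Windowing scheme, features, and model settings are illustrative assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

FS = 50  # assumed sensor sampling rate in Hz

def window_features(accel, win_s=5):
    """Split a 1-D accelerometer trace into windows and extract simple features."""
    win = FS * win_s
    feats = []
    for start in range(0, len(accel) - win + 1, win):
        seg = accel[start:start + win]
        spectrum = np.abs(np.fft.rfft(seg - seg.mean()))
        freqs = np.fft.rfftfreq(win, d=1.0 / FS)
        feats.append([
            np.sqrt(np.mean(seg ** 2)),   # RMS amplitude
            seg.var(),                    # signal variance
            freqs[spectrum.argmax()],     # dominant frequency (tremor is ~4-6 Hz)
        ])
    return np.array(feats)

# Placeholder signals and labels stand in for real wrist/ankle recordings
# and clinician-rated tremor scores.
X_train = window_features(np.random.randn(FS * 600))
y_train = np.random.uniform(0, 4, len(X_train))

model = GradientBoostingRegressor(n_estimators=200, max_depth=3)
model.fit(X_train, y_train)
tremor_estimate = model.predict(window_features(np.random.randn(FS * 60)))
```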

As the patient went about their day, performing usual tasks such as walking, eating, getting dressed, and resting, the sensors collected and collated the data. Results from the study showed that using the gradient tree boosting method, the researchers were able to estimate with high accuracy both the total tremor amount and the resting tremor amount. In the majority of cases, the results matched those obtained using the UPDRS system. On the flip side, the LSTM-based method performed worse.


“This finding is important because our method is able to provide a better temporal resolution to estimate tremors to provide a measure of the full spectrum of tremor changes over time,” said Behnaz Ghoraani, Ph.D., senior author of the paper, assistant professor in FAU’s Department of Computer and Electrical Engineering and Computer Science, and a fellow of FAU’s Institute for Sensing and Embedded Network Systems (I-SENSE) and FAU’s Brain Institute (I-BRAIN).

Photo by Alina Grubnyak on Unsplash

MIT Researchers Develop New Collision Avoidance System to Make Autonomous Systems Safer

The adoption of autonomous cars and systems has risen dramatically over the past few years. Each individual car or system will no doubt be tested quite thoroughly (and approved) for safety, but there are still areas of improvement which can be made across the board.

One such area that MIT researchers have been working hard to improve in the autonomous world is the ability of systems to detect moving objects. They have achieved this by developing a system that can detect whether a moving object is coming around a corner.

The new system works by sensing tiny changes to the shadows on the ground and could one day be used by autonomous cars to avoid a potential collision with whatever may be lurking around the corner. It’s a system that could also be adopted by hospital robots which navigate through hospital hallways to avoid hitting people when delivering medicines or supplies. 


In a recent paper, the researchers explained how experiments involving an autonomous car and an autonomous wheelchair were both completed successfully. In fact, when it came to sensing and stopping for oncoming vehicles, the newly developed system beat LIDAR by a fraction of a second!

So, all in all, it seems like a fantastic system that aids vehicles and robots in being safer by giving them an early warning that something is approaching around the corner. This allows the vehicle or machine to slow down, readjust its position, and advance in a way that avoids a collision.

Using computer-vision techniques, the system, aptly named “ShadowCam”, detects any changes to the shadows on the ground. By measuring the changes in light intensity over time, the system is able to determine whether something is getting closer or moving further away. This information is computed and then classified. If the computer detects an encroaching object, it reacts accordingly.

To adapt the system for autonomous vehicles, the researchers combined image registration with a new kind of visual odometry technique. Image registration, often used in computer vision, overlays multiple images in order to reveal any variations among them. Visual odometry, a technique used on the Mars rovers, estimates the motion of a camera in real time.
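
As a rough sketch of the image-registration step, the snippet below aligns two consecutive grayscale frames using OpenCV’s ECC method (`cv2.findTransformECC`) so that, once differenced, only genuine scene changes such as moving shadows remain. ShadowCam’s actual pipeline combines registration with visual odometry; the translation-only ECC alignment here is a simplified stand-in.

```python
# A minimal image-registration sketch: warp the current frame onto the
# previous one so that camera motion is cancelled before differencing.
import cv2
import numpy as np

def register(prev_gray, curr_gray):
    """Align curr_gray to prev_gray with ECC-based translation estimation."""
    prev_f = prev_gray.astype(np.float32)
    curr_f = curr_gray.astype(np.float32)
    warp = np.eye(2, 3, dtype=np.float32)  # identity warp as the starting guess
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 50, 1e-6)
    _, warp = cv2.findTransformECC(prev_f, curr_f, warp,
                                   cv2.MOTION_TRANSLATION, criteria)
    h, w = prev_gray.shape
    return cv2.warpAffine(curr_gray, warp, (w, h),
                          flags=cv2.INTER_LINEAR + cv2.WARP_INVERSE_MAP)
```

In a full pipeline, each registered frame would then be passed to the amplify-and-threshold classification step described below.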


One specific technique employed by the researchers is called “Direct Sparse Odometry” (DSO for short). This system uses a 3D point cloud to plot feature points in the environment. A computer-vision pipeline then selects only the areas of interest, such as those near a corner. As images of the selected area are taken, the DSO method overlays them from the robot’s viewpoint.

An object is classified as a dynamic, moving one through signal amplification: any pixels that may contain shadows are boosted in color, which improves the signal-to-noise ratio and makes weak signals from shadow changes stand out even more. Once this signal reaches a certain threshold, ShadowCam automatically classifies the image as dynamic. And depending on how strong that signal is, the robot is instructed to slow down or stop.
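
A minimal sketch of the amplify-and-threshold step might look like the following; the gain and threshold values are illustrative placeholders, not ShadowCam’s actual parameters.

```python
# A minimal amplify-and-threshold sketch: boost frame-to-frame intensity
# changes and flag the scene as dynamic once they exceed a threshold.
import numpy as np

GAIN = 8.0        # assumed amplification applied to frame differences
THRESHOLD = 4.0   # assumed mean boosted difference that marks a dynamic scene

def classify_dynamic(registered_frames):
    """Return True if amplified shadow changes across frames exceed the threshold."""
    stack = np.stack([f.astype(np.float32) for f in registered_frames])
    diffs = np.abs(np.diff(stack, axis=0))  # per-pixel intensity change over time
    boosted = diffs * GAIN                  # amplify weak shadow signals
    return boosted.mean() > THRESHOLD
```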

Next on the horizon for the researchers is developing the system further so that it works effectively both indoors and outdoors. They will also look at ways of speeding up the system’s shadow detection.