Tweeting a lot to gain popularity is inefficient

The imbalanced structure of Twitter, where a few users have many followers and the large majority barely have a few dozen, means that messages from the most influential accounts have far more impact. Less popular users can compensate by increasing their activity and tweeting more often, but the outcome is costly and inefficient. This was confirmed by an analysis of the social network performed by researchers from the Technical University of Madrid.

Visualisation of the spreading of messages on Twitter (retweets network in green) on the followers network (grey). The nodes represent users and their size is proportional to the number of followers that they have. Red indicates users who have written original tweets and yellow indicates users who have retweeted them.

Credit: Image adapted by A.J. Morales, R.M. Benito et al. – Social Networks

What can Twitter users do to increase their influence? To answer this question, a team of researchers at the Technical University of Madrid (UPM) has analysed thousands of conversations, applied a computational model and devised a measure that relates the effort spent to the influence gained by tweeters.

The results, published in the journal ‘Social Networks’, confirm that the structure of Twitter itself is the key to influence. It is a heterogeneous network: a large number of users have very few followers (a median of 61, according to O’Reilly), while a few, very few, have an enormous number of followers (up to 40-50 million).

With this type of distribution, network position or ‘topocracy’ comes before meritocracy: “Having a larger number of followers is much more important than the user’s ‘effort’ or activity in sending lots of messages,” Rosa M. Benito, head of the research team, tells SINC.

“However, if the underlying network were homogeneous (which it is not), users would have approximately the same number of connections, their position in the network would not matter, and their influence would depend directly on their activity,” the researcher explains.

According to the study, on heterogeneous networks like Twitter the way users send messages matters little, because there will always be a highly influential minority. Tweets sent by more popular people or institutions spread further and have greater impact, even though these accounts usually send very few.

“The data shows that the emergence of a group of users who write fewer tweets but are widely retweeted is due to the social network being heterogeneous,” Rosa M. Benito points out.

The researcher’s message is not exactly encouraging for the majority of tweeters who wish to be more influential: “Ordinary users can gain the same number of retweets as popular users by increasing their activity sharply. It is therefore possible to increase influence through activity, but it is costly and inefficient”.
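
To see why catching up through sheer activity is so costly, consider a back-of-the-envelope comparison. The sketch below is a toy calculation, not the authors' model, and assumes a hypothetical fixed probability that any follower retweets any given tweet:

```python
# Toy calculation (not the authors' model): tweets an ordinary account must send
# to match the expected reach of a single tweet from a heavily followed account.

MEDIAN_FOLLOWERS = 61          # median follower count cited in the article
HUB_FOLLOWERS = 40_000_000     # order of magnitude of the largest accounts
P_RETWEET = 0.001              # hypothetical per-follower retweet probability

def expected_retweets(followers: int, tweets: int, p: float = P_RETWEET) -> float:
    """Expected retweets if each follower relays each tweet with probability p."""
    return followers * tweets * p

hub_reach = expected_retweets(HUB_FOLLOWERS, tweets=1)              # one tweet from a hub
tweets_needed = hub_reach / expected_retweets(MEDIAN_FOLLOWERS, tweets=1)
print(f"A hub's single tweet yields ~{hub_reach:,.0f} expected retweets.")
print(f"A median account would need ~{tweets_needed:,.0f} tweets to match it.")
```

Under these assumptions the median user would have to tweet hundreds of thousands of times to match a single message from the largest accounts, which is the sense in which activity is an inefficient substitute for network position.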

Courtesy: AlphaGalileo

Computer System Automatically Generates TCP Congestion-Control Algorithms

TCP, the transmission control protocol, is one of the core protocols governing the Internet: If counted as a computer program, it’s the most widely used program in the world.


One of TCP’s main functions is to prevent network congestion by regulating the rate at which computers send data. In the last 25 years, engineers have made steady improvements to TCP’s congestion-control algorithms, resulting in several competing versions of the protocol: Many Windows computers, for instance, run a version called Compound TCP, while Linux machines run a version called TCP Cubic.

At the annual conference of the Association for Computing Machinery’s Special Interest Group on Data Communication this summer, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory and Center for Wireless Networks and Mobile Computing will present a computer system, dubbed Remy, that automatically generates TCP congestion-control algorithms. In the researchers’ simulations, algorithms produced by Remy significantly outperformed algorithms devised by human engineers.


“I think people can think about what happens to one or two connections in a network and design around that,” says Hari Balakrishnan, the Fujitsu Professor in Electrical Engineering and Computer Science, who co-authored the new paper with graduate student Keith Winstein. “When you have even a handful of connections, or more, and a slightly more complicated network, where the workload is not a constant — a single file being sent, or 10 files being sent — that’s very hard for human beings to reason about. And computers seem to be a lot better about navigating that search space.”

Lay of the land

Remy is a machine-learning system, meaning that it arrives at its output by trying lots of different possibilities, and exploring further variations on those that seem to work best. Users specify certain characteristics of the network, such as whether the bandwidth across links fluctuates or the number of users changes, and by how much. They also provide a “traffic profile” that might describe, say, the percentage of users who are browsing static webpages or using high-bandwidth applications like videoconferencing.
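
The designer therefore supplies prior assumptions rather than an algorithm. A hypothetical sketch of that kind of input follows; the class and field names are illustrative and are not Remy's actual interface:

```python
# Hypothetical designer-supplied prior: assumed network ranges plus a traffic mix.
# Names and values are illustrative, not Remy's real input format.
from dataclasses import dataclass

@dataclass
class NetworkPrior:
    link_rate_mbps: tuple[float, float]   # assumed range of bottleneck bandwidth
    rtt_ms: tuple[float, float]           # assumed range of round-trip times
    max_senders: int                      # how many flows may share the bottleneck

@dataclass
class TrafficProfile:
    web_browsing_fraction: float          # short, bursty transfers
    video_conferencing_fraction: float    # long-lived, rate-sensitive flows

prior = NetworkPrior(link_rate_mbps=(1.0, 100.0), rtt_ms=(50.0, 150.0), max_senders=16)
traffic = TrafficProfile(web_browsing_fraction=0.7, video_conferencing_fraction=0.3)
```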

Finally, the user also specifies the metrics to be used to evaluate network performance. Standard metrics include throughput, which indicates the total amount of data that can be moved through the network in a fixed amount of time, and delay, which indicates the average amount of time it takes one packet of information to travel from sender to receiver. The user can also assign metrics different weights — say, reducing delay is important, but only one-third as important as increasing throughput.
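
As a concrete illustration of that weighting, one simple way to collapse the two metrics into a single per-flow score is sketched below; this is only an example objective, not necessarily the one used in the paper:

```python
# Weighted objective sketch: reward throughput, penalize delay, with delay counted
# at one-third the weight of throughput (the example weighting from the text).
import math

def flow_score(throughput_mbps: float, delay_ms: float,
               delay_weight: float = 1.0 / 3.0) -> float:
    """Higher is better; log terms make proportional changes comparable across flows."""
    return math.log(throughput_mbps) - delay_weight * math.log(delay_ms)

# Halving delay at constant throughput raises the score by (1/3) * log(2).
print(flow_score(10.0, 160.0), flow_score(10.0, 80.0))
```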

Remy needs to test each candidate algorithm’s performance under a wide range of network conditions, which could have been a prohibitively time-consuming task. But Winstein and Balakrishnan developed a clever algorithm that can concentrate Remy’s analyses on cases in which small variations in network conditions produce large variations in performance, while spending much less time on cases where network behavior is more predictable.
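
The general idea can be sketched as follows, with a made-up allocation rule that is not the authors' algorithm: measure how much a configuration's score fluctuates across repeated runs, and spend the extra simulation budget where the fluctuation is largest.

```python
# Variance-driven budgeting sketch (illustrative only, not the authors' method):
# configurations whose measured scores vary the most receive the most extra runs.
import statistics

def allocate_runs(scores_by_config: dict, extra_runs: int) -> dict:
    """Split extra_runs across configurations in proportion to score variance."""
    variances = {name: statistics.pvariance(s) for name, s in scores_by_config.items()}
    total = sum(variances.values()) or 1.0
    return {name: round(extra_runs * v / total) for name, v in variances.items()}

observed = {"stable_wired": [9.8, 9.9, 9.9], "bursty_cellular": [3.1, 7.5, 1.2]}
print(allocate_runs(observed, extra_runs=100))   # nearly all runs go to the bursty case
```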

They also designed Remy to evaluate possible indicators of network congestion that human engineers have not considered. Typically, TCP congestion-control algorithms look at two main factors: whether individual data packets arrive at their intended destination and, if they do, how long it takes for acknowledgments to arrive. But as it turns out, the ratio between the rates at which packets are sent and received is a rich signal that can dictate a wide range of different behaviors on the sending computer’s end.
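
The sketch below shows how a sender might track these quantities per acknowledgment; the class and field names are illustrative rather than the exact variables a Remy-generated algorithm uses:

```python
# Illustrative per-connection bookkeeping: acknowledgment timing plus the ratio
# between the sending rate and the rate at which acknowledgments come back.
from collections import deque

class CongestionSignals:
    def __init__(self) -> None:
        self.send_times: deque = deque(maxlen=128)   # when recent packets were sent
        self.ack_times: deque = deque(maxlen=128)    # when their acknowledgments arrived

    def on_send(self, now: float) -> None:
        self.send_times.append(now)

    def on_ack(self, now: float, rtt: float) -> dict:
        self.ack_times.append(now)
        send_rate = self._rate(self.send_times)
        recv_rate = self._rate(self.ack_times)
        return {
            "rtt": rtt,  # how long this acknowledgment took to arrive
            "send_recv_ratio": send_rate / recv_rate if recv_rate else float("inf"),
        }

    @staticmethod
    def _rate(times: deque) -> float:
        """Events per second over the recorded window (0.0 if too few samples)."""
        if len(times) < 2 or times[-1] == times[0]:
            return 0.0
        return (len(times) - 1) / (times[-1] - times[0])
```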

Down to cases

Indeed, where a typical TCP congestion-control algorithm might consist of a handful of rules — if the percentage of dropped packets crosses some threshold, cut the transmission rate in half — the algorithms that Remy produces can have more than 150 distinct rules.
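
To make the contrast concrete, here is what a drastically shrunken rule table in that style could look like; the thresholds and actions are invented for illustration and are not taken from any actual generated algorithm:

```python
# Illustrative two-rule controller: each rule maps a region of the observed signals
# to an action on the congestion window and on the pacing of outgoing packets.
from dataclasses import dataclass

@dataclass
class Action:
    window_multiplier: float      # scale the congestion window by this factor
    window_increment: int         # then add this many packets
    min_send_interval_ms: float   # space outgoing packets at least this far apart

def choose_action(rtt_ms: float, send_recv_ratio: float) -> Action:
    # A machine-generated controller would use far more, finer-grained rules.
    if send_recv_ratio > 1.2 or rtt_ms > 200.0:
        return Action(window_multiplier=0.7, window_increment=0, min_send_interval_ms=5.0)
    return Action(window_multiplier=1.0, window_increment=2, min_send_interval_ms=0.0)

print(choose_action(rtt_ms=250.0, send_recv_ratio=1.0))   # back off when delay is high
```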

“It doesn’t resemble anything in the 30-year history of TCP,” Winstein says. “Traditionally, TCP has relatively simple endpoint rules but complex behavior when you actually use it. With Remy, the opposite is true. We think that’s better, because computers are good at dealing with complexity. It’s the behavior you want to be simple.” Why the algorithms Remy produces work as well as they do is one of the topics the researchers hope to explore going forward.

In the meantime, however, there’s little arguing with the results. Balakrishnan and Winstein tested Remy’s algorithms on ns-2, a simulation system that is standard in the field.

In tests that simulated a high-speed, wired network with consistent transmission rates across physical links, Remy’s algorithms roughly doubled network throughput when compared to Compound TCP and TCP Cubic, while reducing delay by two-thirds. In another set of tests, which simulated Verizon’s cellular data network, the gains were smaller but still significant: a 20 to 30 percent improvement in throughput, and a 25 to 40 percent reduction in delay.

Courtesy: ScienceDaily

Unique Properties of Graphene Lead to a New Paradigm for Low-Power Telecommunications

New research by Columbia Engineering demonstrates remarkable nonlinear optical behavior of graphene that may lead to broad applications in optical interconnects and low-power photonic integrated circuits. By placing a sheet of graphene just one carbon atom thick on a silicon photonic crystal structure, the researchers transformed the originally passive device into an active one that generated microwave photonic signals and performed parametric wavelength conversion at telecommunication wavelengths.

Ultra-low-power optical information processing is based on graphene on silicon photonic crystal nanomembranes. (Credit: Nicoletta Barolini)

“We have been able to demonstrate and explain the strong nonlinear response from graphene, which is the key component in this new hybrid device,” says Tingyi Gu, the study’s lead author and a Ph.D. candidate in electrical engineering. “Showing the power-efficiency of this graphene-silicon hybrid photonic chip is an important step forward in building all-optical processing elements that are essential to faster, more efficient, modern telecommunications. And it was really exciting to explore the ‘magic’ of graphene’s amazingly conductive properties and see how graphene can boost optical nonlinearity, a property required for the digital on/off two-state switching and memory.”

The study, led by Chee Wei Wong, professor of mechanical engineering and director of the Center for Integrated Science and Engineering, and Solid-State Science and Engineering, will be published online as an Advance Online Publication on Nature Photonics’ website on July 15 and in print in the August issue. The team of researchers from Columbia Engineering and the Institute of Microelectronics in Singapore is working together to investigate optical physics, materials science, and device physics to develop next-generation optoelectronic elements.

They have engineered a graphene-silicon device whose optical nonlinearity enables the system parameters (such as transmittance and wavelength conversion) to change with the input power level. The researchers also were able to observe that, by optically driving the electronic and thermal response in the silicon chip, they could generate a radio frequency carrier on top of the transmitted laser beam and control its modulation with the laser intensity and color. Using different optical frequencies to tune the radio frequency, they found that the graphene-silicon hybrid chip achieved radio frequency generation with a resonant quality factor more than 50 times lower than what other scientists have achieved in silicon.

“We are excited to have observed four-wave mixing in these graphene-silicon photonic crystal nanocavities,” says Wong. “We generated new optical frequencies through nonlinear mixing of two electromagnetic fields at low operating energies, allowing reduced energy per information bit. This allows the hybrid silicon structure to serve as a platform for all-optical data processing with a compact footprint in dense photonic circuits.”
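
For orientation, the energy bookkeeping behind four-wave mixing with a single (degenerate) pump is a standard relation rather than something specific to this paper: two pump photons are converted into a signal photon and an idler photon at a new frequency.

```latex
% Energy conservation in degenerate four-wave mixing: two pump photons are
% annihilated while a signal photon and an idler photon at a new frequency appear.
\[
  2\,\omega_{\mathrm{pump}} = \omega_{\mathrm{signal}} + \omega_{\mathrm{idler}}
  \quad\Longrightarrow\quad
  \omega_{\mathrm{idler}} = 2\,\omega_{\mathrm{pump}} - \omega_{\mathrm{signal}}
\]
```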

Wong credits his outstanding students for the exceptional work they’ve done on the study, and adds, “We are fortunate to have the expertise right here at Columbia Engineering to combine the optical nonlinearity in graphene with chip-scale photonic circuits to generate microwave photonic signals in new and different ways.”

Until recently, researchers could only isolate graphene as single crystals with micron-scale dimensions, essentially limiting the material to studies confined within laboratories. “The ability to synthesize large-area films of graphene has the obvious implication of enabling commercial production of these proven graphene-based technologies,” explains James Hone, associate professor of mechanical engineering, whose team provided the high quality graphene for this study. “But large-area films of graphene can also enable the development of novel devices and fundamental scientific studies requiring graphene samples with large dimensions. This work is an exciting example of both — large-area films of graphene enable the fabrication of novel opto-electronic devices, which in turn allow for the study of scientific phenomena.”

Commenting on the study, Xiang Zhang, director of the National Science Foundation Nanoscale Science and Engineering Center at the University of California at Berkeley, says, “This new study in integrating graphene with silicon photonic crystals is very exciting. Using the large nonlinear response of graphene in silicon photonics demonstrated in this work will be a promising approach for ultra-low power on-chip optical communications.”

“Graphene has been considered a wonderful electronic material in which electrons move like effectively massless particles in an atomically thin layer,” notes Philip Kim, professor of physics and applied physics at Columbia, one of the early pioneers of graphene research, who discovered its high electronic conductivity at low temperatures. “And now, the recent excellent work done by this group of Columbia researchers demonstrates that graphene is also a unique electro-optical material for ultrafast nonlinear optical modulation when it is combined with silicon photonic crystal structures. This opens an important doorway for many novel optoelectronic device applications, such as ultrafast chip-scale high-speed optical communications.”

 

Disentangling Information from Photons

Researchers Filippo Miatto and colleagues from the University of Strathclyde, Glasgow, UK, have found a new method of reliably assessing the information contained in photon pairs used for applications in cryptography and quantum computing. The findings, published in The European Physical Journal D, are so robust that they enable access to the information even when the measurements on photon pairs are imperfect.

The authors focused on photon pairs described as being in a state of quantum entanglement: i.e., made up of many superimposed pairs of states. This means that these photon pairs are intimately linked by common physical characteristics such as a spatial property called orbital angular momentum, which can display a different value for each superimposed state.

Miatto and his colleagues relied on a tool capable of decomposing the photon pairs’ superimposed states onto the multiple dimensions of a Hilbert space, which is a virtual space described by mathematical equations. This approach allowed them to understand the level of the photon pairs’ entanglement.
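
In standard notation this is the Schmidt decomposition of the two-photon state; the textbook form below is added for orientation and is not quoted from the paper:

```latex
% Schmidt decomposition of an entangled photon pair: the number of significantly
% weighted terms (the effective dimension) quantifies how entangled the pair is.
\[
  |\psi\rangle_{AB} \;=\; \sum_{n} \sqrt{\lambda_n}\, |n\rangle_{A}\,|n\rangle_{B},
  \qquad \sum_{n} \lambda_n = 1 .
\]
```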

The authors showed that the higher the degree of entanglement, the more accessible the information that photon pairs carry. This means that generating entangled photon pairs with a sufficiently high dimension — that is with a high enough number of decomposed photon states that can be measured — could help reveal their information with great certainty.

As a result, even an imperfect measurement of the photons’ physical characteristics does not affect the amount of information that can be gained, as long as the level of entanglement is initially strong. These findings could lead to quantum information applications with greater resilience to errors and a higher density of information coded per photon pair. They could also lead to cryptography applications in which fewer photons carry more information about complex quantum encryption keys.

Flexible Channel Width Improves User Experience On Wireless Systems

Researchers from North Carolina State University have developed a technique to efficiently divide the bandwidth of the wireless spectrum in multi-hop wireless networks to improve operation and provide all users in the network with the best possible performance.

“Our objective is to maximize throughput while ensuring that all users get similar ‘quality of experience’ from the wireless system, meaning that users get similar levels of satisfaction from the performance they experience from whatever applications they’re running,” says Parth Pathak, a Ph.D. student in computer science at NC State and lead author of a paper describing the research.

Multi-hop wireless networks use multiple wireless nodes to provide coverage to a large area by forwarding and receiving data wirelessly between the nodes. However, because they have limited bandwidth and may interfere with each other’s transmissions, these networks can have difficulty providing service fairly to all users within the network. Users who place significant demands on network bandwidth can effectively throw the system off balance, with some parts of the network clogging up while others remain underutilized.

Over the past few years, new technology has become available that could help multi-hop networks use their wireless bandwidth more efficiently by splitting the band into channels of varying sizes, according to the needs of the users in the network. Previously, it was only possible to form channels of equal size. However, it was unclear how multi-hop networks could take advantage of this technology, because there was not a clear way to determine how these varying channel widths should be assigned.

Now an NC State team has advanced a solution to the problem.

“We have developed a technique that improves network performance by determining how much channel width each user needs in order to run his or her applications,” says Dr. Rudra Dutta, an associate professor of computer science at NC State and co-author of the paper. “This technique is dynamic. The channel width may change — becoming larger or smaller — as the data travels between nodes in the network. The amount of channel width allotted to users is constantly being modified to maximize the efficiency of the system and avoid what are, basically, data traffic jams.”
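
As a toy illustration of demand-driven width assignment, the sketch below splits a band among links in proportion to the traffic each must carry; it is a generic example, not the NC State algorithm, which also has to account for interference between neighboring hops:

```python
# Generic demand-proportional channel-width assignment (illustrative only).
def assign_widths(total_mhz: float, demand_mbps: dict, min_mhz: float = 5.0) -> dict:
    """Give every link at least min_mhz, then share the rest by relative demand."""
    links = list(demand_mbps)
    remaining = total_mhz - min_mhz * len(links)
    total_demand = sum(demand_mbps.values()) or 1.0
    return {link: min_mhz + remaining * demand_mbps[link] / total_demand
            for link in links}

# Three links sharing a 40 MHz band; the busiest hop gets the widest channel.
print(assign_widths(40.0, {"A-B": 30.0, "B-C": 10.0, "B-D": 5.0}))
```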

In simulation models, the new technique results in significant improvements in a network’s data throughput and in its “fairness” — the degree to which all network users benefit from this throughput.

The researchers hope to test the technique in real-world conditions using CentMesh, a wireless network on the NC State campus.

World’s Smallest Radio Stations

We have known since the dawn of modern physics that although events in our everyday life can be described by classical physics, the interaction of light and matter is, deep down, governed by the laws of quantum mechanics. Despite this century-old wisdom, accessing truly quantum mechanical situations remains nontrivial, fascinating and noteworthy even in the laboratory. Recently, interest in this area has been boosted beyond academic curiosity because of the potential for more efficient and novel forms of information processing.

Artist’s view of a single molecule sending a stream of single photons to a second molecule at a distance, in quantum analogy to the radio communication between two stations. (Credit: Robert Lettow)

In one of the most basic proposals, a single atom or molecule acts as a quantum bit that processes signals delivered via single photons. In the past twenty years scientists have shown that single molecules can be detected and single photons can be generated. However, exciting a molecule with a single photon had remained elusive, because the probability that a molecule sees and absorbs a photon is very small. As a result, billions of photons per second usually have to be directed at a molecule to obtain a signal from it.

One common way to get around this difficulty in atomic physics has been to build a cavity around the atom so that a photon remains trapped long enough to yield a favorable interaction probability. Scientists at ETH Zurich and the Max Planck Institute for the Science of Light in Erlangen have now shown that even a flying photon can be made to interact with a single molecule. Among the many challenges in the way of such an experiment is the realization of a suitable source of single photons with the proper frequency and bandwidth. Although one can purchase lasers at different colors and specifications, sources of single photons are not available on the market.

So a team of scientists led by Professor Vahid Sandoghdar made its own. To do this, they took advantage of the fact that when an atom or molecule absorbs a photon it makes a transition to a so-called excited state. After a few nanoseconds (billionths of a second) this state decays back to the ground state and emits exactly one photon. In their experiment, the group used two samples containing fluorescent molecules embedded in organic crystals and cooled them to about 1.5 K (-272 °C). Single molecules in each sample were detected by a combination of spectral and spatial selection.

To generate single photons, a single molecule was excited in the “source” sample. When the excited state of the molecule decayed, the emitted photons were collected and tightly focused onto the “target” sample a few meters away. To ensure that a molecule in that sample “sees” the incoming photons, the team had to make sure that they have the same frequency. Furthermore, the precious single photons had to interact with the target molecule in an efficient manner. A molecule is about one nanometer in size (100,000 times smaller than the diameter of a human hair), but the focus of a light beam cannot be smaller than a few hundred nanometers.

This usually means that most of the incoming light simply goes around the molecule, without the two ever “seeing” each other. However, if the incoming photons are resonant with the quantum mechanical transition of the molecule, the latter acts as a disk with an area comparable to that of the focused light beam. In this process the molecule acts as an antenna that grabs the light waves in its vicinity. The results of the study, published in Physical Review Letters, provide the first example of long-distance communication between two quantum optical antennas, in analogy to the 19th-century experiments of Hertz and Marconi with radio antennas. In those early efforts, dipolar oscillators were used as transmitting and receiving antennas.
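
The “disk” mentioned above is the resonant absorption cross-section of an ideal two-level emitter, a standard result that depends only on the wavelength of the light, not on the physical size of the molecule:

```latex
% Maximum resonant cross-section of an ideal two-level emitter. For visible light
% this disk is a few hundred nanometers across, comparable to a diffraction-limited
% focal spot and vastly larger than the roughly 1 nm molecule itself.
\[
  \sigma_{\max} = \frac{3\lambda^{2}}{2\pi}
\]
```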

In the current experiment, two single molecules mimic that scenario at optical frequencies and via a nonclassical optical channel, namely a single-photon stream. This opens many doors for further exciting experiments in which single photons act as carriers of quantum information to be processed by single emitters.

Hybrid Technology for Quantum Information Systems

The merging of two technologies under development, plasmonics and nanophotonics, is promising the emergence of new “quantum information systems” far more powerful than today’s computers. The technology hinges on using single photons, the tiny particles that make up light, for switching and routing in future computers that might harness the exotic principles of quantum mechanics.

Structures called “metamaterials” and the merging of two technologies under development are promising the emergence of new “quantum information systems” far more powerful than today’s computers. The concept hinges on using single photons – the tiny particles that make up light – for switching and routing in future computers that might harness the exotic principles of quantum mechanics. The image at left depicts a “spherical dispersion” of light in a conventional material, and the image at right shows the design of a metamaterial that has a “hyperbolic dispersion” not found in any conventional material, potentially producing quantum-optical applications. (Credit: Zubin Jacob)

The quantum information processing technology would use structures called “metamaterials,” artificial nanostructured media with exotic properties.

The metamaterials, when combined with tiny “optical emitters,” could make possible a new hybrid technology that uses “quantum light” in future computers, said Vladimir Shalaev, scientific director of nanophotonics at Purdue University’s Birck Nanotechnology Center and a distinguished professor of electrical and computer engineering.

The concept is described in an article published on October 28 in the journal Science. The article appeared in the magazine’s Perspectives section and was written by Shalaev and Zubin Jacob, an assistant professor of electrical and computer engineering at the University of Alberta, Canada.

“A seamless interface between plasmonics and nanophotonics could guarantee the use of light to overcome limitations in the operational speed of conventional integrated circuits,” Shalaev said.

Researchers are proposing the use of “plasmon-mediated interactions,” or devices that manipulate individual photons and quasiparticles called plasmons that combine electrons and photons.

One of the approaches, pioneered at Harvard University, is a tiny nanowire that couples individual photons and plasmons. Another approach is to use hyperbolic metamaterials, suggested by Jacob; Igor Smolyaninov, a visiting research scientist at the University of Maryland; and Evgenii Narimanov, an associate professor of electrical and computer engineering at Purdue. Quantum-device applications using building blocks for such hyperbolic metamaterials have been demonstrated in Shalaev’s group.

“We would like to record and read information with single photons, but we need a very efficient source of single photons,” Shalaev said. “The challenge here is to increase the efficiency of generation of single photons in a broad spectrum, and that is where plasmonics and metamaterials come in.”

Today’s computers work by representing information as a series of ones and zeros, or binary digits called “bits.”

Computers based on quantum physics would have quantum bits, or “qubits,” that can exist in superpositions of the on and off states simultaneously, dramatically increasing the computer’s power and memory. Quantum computers would also take advantage of a strange phenomenon described by quantum theory called “entanglement,” in which the states of multiple qubits become correlated more strongly than classical physics allows, so that instead of only the states of one and zero there are many possible “entangled quantum states.”
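
In standard notation, a single qubit is a weighted superposition of the two classical states; the textbook expression below is included for orientation and is not taken from the article:

```latex
% A qubit holds both classical values at once; a measurement returns 0 with
% probability |alpha|^2 and 1 with probability |beta|^2.
\[
  |\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle,
  \qquad |\alpha|^{2} + |\beta|^{2} = 1 .
\]
```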

An obstacle in developing quantum information systems is finding a way to preserve the quantum information long enough to read and record it. One possible solution might be to use diamond with “nitrogen vacancies,” defects that often occur naturally in the crystal lattice of diamonds but can also be produced by exposure to high-energy particles and heat.

“The nitrogen vacancy in diamond operates in a very broad spectral range and at room temperature, which is very important,” Shalaev said.