Content-Centric Networking

In today’s Internet, there is only one kind of data packet—one that carries both content and requests for content between users. But in a CCN network, there are two types: content packets and interest packets. They work together to bring information to users. Content packets are most like traditional data packets. The bits in a content packet may specify part of an ad on a Web page, a piece of a photo in an article, or the first few seconds of a video. Interest packets, on the other hand, are like golden retrievers that a user sends out onto the network to find a specific content packet and bring it back.

When you visit a Web page, your computer needs to fetch about 100 pieces of content on average. A piece of content could be a block of text, a photo, or a headline. With CCN, when you navigate to a website or click on a link, you automatically send out interest packets to specify the content you would like to retrieve. Typing in a single URL, or Web address, can trigger a user’s browser to automatically send out hundreds of interest packets to search for the individual components that make up that page.

Both interest and content packets have labels, each of which is a series of bits that indicate which type of packet it is, the time it was generated, and other information. The label on a content packet also includes a name that designates what bits of content it holds, while the label on an interest packet indicates which content it wishes to find. When a user clicks on a link, for example, and generates a flurry of interest packets, the network searches for content packets with matching names to satisfy that request.

The name on a packet’s label is called a uniform resource identifier (URI), and it has three main parts. The first part is a prefix that routers use to look up the general destination for a piece of content, and the second part describes the specific content the packet holds or wishes to find. The third part lists any additional information, such as when the content was created or in what order it should appear in a series.

Suppose a Web surfer’s browser is using CCN to navigate to this article on IEEE Spectrum’s website. The network must find and deliver all the content packets that make up the complete article. To make that process easier, URIs use a hierarchical naming system to indicate which packets are needed for the page, and in what order. For example, one content packet might be named spectrum.ieee/2017/April/ver=2/chunk=9:540. In this example, spectrum.ieee is the routable prefix, ver=2 indicates this is the second version of the article, and chunk=9:540 identifies the packet as the ninth of the 540 that make up the complete article.
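A hierarchical name like the one above can be pulled apart mechanically. Here is a minimal sketch in Python; the ver=/chunk= conventions follow the example in this article, not any formal CCN naming specification:

```python
# Toy parser for the hierarchical CCN-style name used in the example.
# The field conventions (ver=N, chunk=i:total) are illustrative only.

def parse_ccn_name(name: str) -> dict:
    """Split a CCN-style URI into routable prefix, path, version, and chunk info."""
    parts = name.split("/")
    info = {"prefix": parts[0], "path": [], "version": None,
            "chunk": None, "total_chunks": None}
    for part in parts[1:]:
        if part.startswith("ver="):
            info["version"] = int(part[4:])
        elif part.startswith("chunk="):
            chunk, total = part[6:].split(":")
            info["chunk"] = int(chunk)
            info["total_chunks"] = int(total)
        else:
            info["path"].append(part)
    return info

info = parse_ccn_name("spectrum.ieee/2017/April/ver=2/chunk=9:540")
print(info["prefix"], info["version"], info["chunk"], info["total_chunks"])
# prints: spectrum.ieee 2 9 540
```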

Once a CCN user has clicked that link or typed it in as a Web address, the user’s machine dispatches an interest packet into the network in search of that content, along with other interest packets to search for packets 10 and 11. As the interest packet for number 9 travels along, each router or server it encounters must evaluate that interest packet and determine whether it holds the content packet that can satisfy its request. If not, that node must figure out where in the network to forward the interest packet next.

To do all of this, every node relies on a system known as a CCN forwarder. The forwarder operates on components that are similar to what you’d find in a traditional router. A CCN forwarder requires a processor, memory, and storage to manage requests. The forwarder also runs a common software program called a forwarding engine. The forwarding engine decides where to store content, how to balance loads when traffic is heavy, and which route between two hosts is best.

The forwarding engine in a CCN network has three major components: the content store, the pending interest table, and the forwarding information base. Broadly speaking, CCN works like this: A node’s forwarding engine receives interest packets and then checks to see if they are in its content store. If not, the engine next consults the pending interest table and, as a last resort, searches its forwarding information base. While it’s routing information, the engine also uses algorithms to decide which content to store, or cache, for the future, and how best to deliver content to users.
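The lookup order just described (content store first, then the pending interest table, then the forwarding information base) can be sketched in a few dozen lines. This is a simplified sketch with dict-based tables and made-up face names, not a real forwarder implementation:

```python
# Minimal sketch of a CCN forwarding engine's decision loop.
# Tables are plain dicts; a production forwarder uses optimized structures.

class Forwarder:
    def __init__(self):
        self.content_store = {}   # name -> cached content packet
        self.pit = {}             # name -> list of faces awaiting that content
        self.fib = {}             # routable prefix -> outgoing face

    def on_interest(self, name: str, in_face: str) -> str:
        # 1. Content store: satisfy the interest immediately if cached.
        if name in self.content_store:
            return f"send {name} back via {in_face}"
        # 2. Pending interest table: an identical interest is already in
        #    flight, so just record the new face and wait for the content.
        if name in self.pit:
            self.pit[name].append(in_face)
            return "aggregated with pending interest"
        # 3. Forwarding information base: last resort. Forward toward the
        #    routable prefix and leave a bread crumb in the PIT.
        prefix = name.split("/")[0]
        out_face = self.fib.get(prefix)
        if out_face is None:
            return "no route"
        self.pit[name] = [in_face]
        return f"forward via {out_face}"

fwd = Forwarder()
fwd.fib["spectrum.ieee"] = "face2"   # hypothetical route toward the prefix
print(fwd.on_interest("spectrum.ieee/2017/April/ver=2/chunk=9:540", "face1"))
# prints: forward via face2
```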

To understand how that system improves on our existing Internet protocols, consider what happens when a new interest packet arrives at a node. The forwarding engine first looks for the content in the content store, which is a database that can hold thousands of content packets in its memory for quick and easy access, like the cache memory in a conventional router. But CCN has a key difference. Unlike the traditional Internet protocols, which permit content to be stored only with the original host or on a limited number of dedicated servers, CCN permits any node to copy and store any content anywhere in the network.

To return to our example, if the forwarder finds the content it’s looking for in the node’s content store, the system sends that content packet back to the user through the same “face,” or gateway, by which the interest packet entered the system. However, when an interest packet arrives, that node might not hold a copy of the needed content in its content store. So for its next step, the forwarding engine consults the pending interest table, a logbook that keeps a running tally of all the interest packets that have recently traveled through the node and what content they were seeking. It also notes the gateway through which each interest packet arrived and the gateway it used to forward that content along.

By checking the pending interest table (PIT) whenever a new interest packet arrives, the forwarding engine can see whether it has recently received any other interest packets for the same—or similar—content. If so, it can choose to forward the new interest packet along the exact same route. Or it can wait for that content to travel back on its return trip, make a copy, and then send it to all users who expressed interest in it.

The idea here is that these PIT records create a trail of bread crumbs for each interest packet, tracing its route through the network from node to node until it finds the content it’s seeking. This is very different from conventional networks, where routers immediately “forget” information they’ve forwarded. Once the content is found, the content packet retraces that trail: the forwarder at each node consults its PIT to send the packet back along the reverse path to the original requester.

Suppose, though, that an interest packet arrives at a node and the forwarding engine can’t find a copy of the requested content in its content store, nor any entry for it in the pending interest table. At this point, the node turns to the forwarding information base—its last resort when trying to satisfy a new request.

Ideally, the forwarding information base (FIB) is an index of all the URI prefixes, or routable destinations, in the entire network. When an interest packet arrives, the forwarding engine checks this index to find the requested content’s general whereabouts. Then it sends the interest packet through whatever gateway will move it closer to that location and adds a new entry to the pending interest table for future reference. In reality, the FIB for the entire Internet would be too large to store at every node, so just like today’s routing tables, it is distributed throughout the network.
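The FIB lookup described above is a longest-prefix match on hierarchical names: the engine prefers the most specific route it knows. Here is a sketch; the table contents and face names are hypothetical:

```python
# Longest-prefix matching on hierarchical, slash-delimited names,
# the kind of lookup a CCN forwarding information base performs.

def fib_lookup(fib, name):
    """Return the face registered for the longest matching name prefix."""
    components = name.split("/")
    # Try the full name first, then progressively shorter prefixes.
    for length in range(len(components), 0, -1):
        prefix = "/".join(components[:length])
        if prefix in fib:
            return fib[prefix]
    return None

fib = {"spectrum.ieee": "faceA", "spectrum.ieee/2017": "faceB"}
print(fib_lookup(fib, "spectrum.ieee/2017/April"))  # the more specific route
# prints: faceB
```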

In a traditional network, routers perform a similar search to find the IP address of the server that holds the bits of information a user wishes to retrieve and figure out which gateway to send the request through. The difference here is that with CCN, the forwarding information base finds the current location of the information itself on the network rather than the address of the server where it’s stored.

By focusing on the location of content rather than tracking down the address of its original host, a CCN network can be more nimble and responsive than today’s networks. In fact, our studies indicate that the CCN model will outperform traditional IP-based networks in three key aspects: reliability, scalability, and security.

CCN improves reliability by allowing any content to be stored anywhere in the network. This feature is particularly useful in wireless networks at points where bit-error rates tend to be high, such as when data is transmitted from a smartphone to a cell tower, or broadcast from a Wi-Fi access point. Current Internet protocols leave error recovery to higher levels of the protocol stack. By keeping a copy of a content packet for a short while after sending it along, a CCN node reduces the upstream traffic for packets that need to be retransmitted. If a packet fails to transmit to the next node, the previous node does not need to request it again from the original host because it has its own copy on hand to retransmit.

The pending interest table can also make it easier for networks to scale. By grouping similar interest packets together, it can reduce the bandwidth needed to satisfy each request. Instead of sending a new request back to the original host for each identical interest packet that arrives, a node could satisfy all of those interest packets with identical copies of the content it has stored locally. If the record shows that there has been a lot of demand for a viral cat video, the algorithms within that node may prompt it to keep an extra copy of all those packets in its content store to more quickly satisfy future requests.

Boosting reliability and making it easier to scale networks are two important benefits. But to us, the most important advantage of CCN is the extra security it offers. In traditional networks, most security mechanisms focus on protecting the routes over which information travels (similar to the strategies used in early circuit-switched telephone networks). In contrast, CCN protects individual packets of information, no matter where they flow.

Currently, two users can establish a secure connection through established Internet protocols. The two most common of these are HTTPS and Transport Layer Security. With HTTPS, a user’s system examines a digital certificate issued by a third party, such as Symantec Corp., to verify that the other user is who she claims to be. Through TLS, users negotiate a set of cryptographic keys and encryption algorithms at the start of each session that they both use to transfer information securely to each other.

With CCN, every content packet is secured by default: each one carries a digital signature that links it back to its original creator. Users can specify in their interest packets which creator they would like to retrieve content from (for example, Netflix). Once they find a content packet with that creator’s matching signature, they can check that signature against a record maintained by a third party to verify that it is the correct signature for that piece of content.
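The signature check can be illustrated with a toy example. A real CCN signature is asymmetric (the creator signs with a private key and anyone can verify with the public key); the HMAC below is a symmetric stand-in so the sketch runs with only Python’s standard library, and the creator key and packet name are made up:

```python
# Toy illustration of per-packet authentication: the signature binds the
# packet's name and its content bytes together, so a cached copy anywhere
# in the network can still be verified. HMAC is a symmetric stand-in for
# a real asymmetric signature scheme.
import hashlib
import hmac

CREATOR_KEY = b"netflix-demo-key"  # hypothetical creator key

def sign_packet(name: str, payload: bytes) -> bytes:
    return hmac.new(CREATOR_KEY, name.encode() + payload, hashlib.sha256).digest()

def verify_packet(name: str, payload: bytes, signature: bytes) -> bool:
    return hmac.compare_digest(sign_packet(name, payload), signature)

sig = sign_packet("netflix/movie/chunk=1:9000", b"first chunk bytes")
print(verify_packet("netflix/movie/chunk=1:9000", b"first chunk bytes", sig))
# prints: True
print(verify_packet("netflix/movie/chunk=1:9000", b"tampered bytes", sig))
# prints: False
```

Because the signature covers both name and payload, neither the content nor the name it is cached under can be swapped without detection.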

With this system in place, creators can allow other users to copy and store their content, because packets will always remain encrypted and verifiable. As long as users can verify the signature, they know that the content packet originated with the creator and that users can securely access the content—a motion picture, say—from anywhere it happens to be.

This security feature brings another bit of good news: Distributed denial-of-service attacks—in which hackers send a large volume of requests to a website or server in order to crash it—are more difficult to execute in CCN. Unusual traffic patterns are easier to discern in a CCN network and can be shut down quickly. On the other hand, clever attackers may just try to figure out a way to flood the network with interest packets instead. This security challenge would have to be solved before CCN could be widely adopted.

Another significant challenge is figuring out how to integrate CCN’s protocols into routers running at the speeds used on current networks. Analysts are especially concerned that routers in a CCN system would have to store rather large FIB and PIT tables to track the many moving content objects on the network, which will present major computational and memory-related challenges. However, researchers are now working on this problem at Cisco, Huawei, PARC, and Washington University in St. Louis, which have all demonstrated prototype routers supporting various elements of the CCN protocols.

 

Networks Take to Space

Quantum physics makes possible a strange phenomenon known as entanglement. Essentially, two or more particles such as photons that get linked or “entangled” can, in theory, influence each other simultaneously no matter how far apart they are. Entanglement is essential to the workings of quantum computers, the networks that would connect them, and the most sophisticated kinds of quantum cryptography—a theoretically unhackable means of information exchange.

Back in 2012, Pan Jianwei, a quantum physicist at the University of Science and Technology of China at Hefei, and his colleagues set the distance record for quantum entanglement. A particle on one side of China’s Qinghai Lake influenced one on the other side, 101.8 kilometers away. However, entanglement gets easily disrupted by interference from the environment, and this fragility has stymied efforts at greater distance records on Earth.

Now, Pan and his colleagues have set a new record for entanglement by using a satellite to connect sites on Earth separated by up to 1,203 km. The main advantage of a space-based approach is that most of the interference that entangled photons face occurs in the 10 km or so of atmosphere closest to Earth’s surface. Above that, the photons encounter virtually no problems, the researchers say.

The researchers launched the quantum science experiment satellite (nicknamed Micius) from Jiuquan, China, in 2016. It orbits the planet at a speed of roughly 28,800 kilometers per hour and an altitude of roughly 500 km. “Through ground-based feasibility studies, we gradually developed the necessary toolbox for the quantum science satellite,” Pan says.

The experiments involved communications between Micius and three ground stations across China. Beacon lasers on both the transmitters and receivers helped them lock onto each other.

Micius generated entangled pairs of photons and then split them up, beaming the members of a pair to separate ground stations. The distance between the satellite and the ground stations varied from 500 to 2,000 km.

The record distance involved photons beamed from Micius to stations in the cities of Delingha and Lijiang. The experiments transmitted entangled photons with an efficiency 10^17 times as great as the best optical fibers can achieve. “We have finally sent entanglement into space and established a much, much larger quantum optics laboratory, which provides us a new platform for quantum networks as well as for probing the interaction of quantum mechanics with gravity,” Pan says.

Although these experiments generated roughly 5.9 million entangled pairs of photons every second, the researchers were able to detect only about one pair per second. Pan’s team expects a thousandfold improvement in this rate “in the next five years,” he says. He also notes that the current transmission rate for entangled pairs is close to what’s necessary to provide quantum cryptography for very brief texts; five years from now, networks of satellites and ground stations could successfully transmit at megahertz rates.
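The gap between the generation rate and the detection rate implies an enormous channel loss, as a quick back-of-the-envelope calculation using the figures above shows:

```python
# Rough loss estimate from the reported rates: ~5.9 million entangled
# pairs generated per second, ~1 pair per second detected.
import math

generated_per_s = 5.9e6
detected_per_s = 1.0

efficiency = detected_per_s / generated_per_s   # fraction of pairs surviving
loss_db = -10 * math.log10(efficiency)          # combined two-downlink loss
print(f"pair survival: {efficiency:.2e} (~{loss_db:.0f} dB of channel loss)")
```

That is, only about one pair in six million survives both downlinks, yet this still vastly outperforms sending the same photons through a thousand kilometers of fiber.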

In another study, researchers in Germany found they could measure the quantum features of laser signals transmitted by a satellite a record 38,600 km away. These findings suggest that satellites could play a role in quantum networks that use less sophisticated forms of quantum cryptography that do not rely on entanglement.

Quantum physicist Christoph Marquardt from the Max Planck Institute for the Science of Light, in Erlangen, Germany, and his colleagues experimented with the Alphasat I-XL satellite, which is in geostationary orbit. Alphasat used laser signals to communicate with a ground station at the Teide Observatory in Tenerife, Spain.

Marquardt notes that the laser communications technology they experimented with is already used commercially in space. That, combined with the success of his and his colleagues’ experiments, suggests that quantum networks that do not rely on entanglement could be set up “as soon as five years from now,” he says.

The Decentralized Internet

The vision, to create a peer-to-peer Internet that is free from firewalls, government regulation, and spying, is one shared by the Decentralized Web movement. It isn’t exactly a new idea. In the real world, the Decentralized Web movement has been working for a couple of years to link people interested in advancing the effort, and pieces of the technology are being developed in various corporate and university labs. Making a true decentralized Web—or decentralized Internet (the two are a little different)—isn’t going to be fast or easy, Decentralized Web evangelist and Internet Archive founder Brewster Kahle told me last month, because, although it is a good idea, it is hard to execute.

At the University of Michigan, Robert Dick, associate professor of electrical engineering and computer science, and a team of two doctoral students and more than a dozen undergraduate volunteers have been focused for seven years on designing and implementing a decentralized network that, if the Internet is shut down or blocked in some way, maintains connections by sending data hopping from phone to phone. The team will this year roll out an app called Anonymouse that allows anonymous microblogs, including images and text, to be sent around its network by phone hopping. There’s no ability to serve Web pages just yet, although that feature is on the road map. Dick says he’s aware of—and may end up teaming with—other organizations working on similar technology.

“We deployed a research prototype with 100 people using it at the University of Michigan,” Dick said. “Messages typically reached 80 percent of the participants in a day. That surprised us; we didn’t think it would work until we reached about 5 percent of the university population—over 2,000 people. We were also surprised by how quickly messages were ferried between campuses, as a user passed another and then got on a bus, for example.”

The challenge, Dick indicated, is getting enough people to use it so it works efficiently. That’s the challenge the fictional Hendricks is wrestling with as well. Hendricks said in a recent episode, “People won’t want to participate until the quality is high, and the quality won’t be high until we got a lot of people.” Hendricks proposes offering users free data compression to encourage them to run the app; Dick intends to target markets where fear of censorship is a very big deal, so the app will be its own reason for existence. For now, the group is taking sign-ups for a public beta to start in the third quarter of this year.

“I can’t get people in the U.S. to understand just how awful it can be when a few people control what everybody else can say,” Dick told me. “However, I can try to make sure that, if and when they understand, the technology to help them will be available.”

Taking a different approach to reinventing the Internet is MaidSafe, a company based in Troon, Scotland, that has been working since 2006 to create a “massive array of internet disks, secure access for everyone.”

MaidSafe chief operating officer Nick Lambert says that the company’s goal is to decentralize all Web services, with users offering up bandwidth and storage space in exchange for a cryptographic token the company calls SafeCoin, which users can then use to pay for network services or convert into cash. The data travels across the existing Internet, but it is more secure because it is broken into chunks and encrypted. One password is required to store and retrieve data, plus another to encrypt and decrypt it. Because the data is distributed, and because the network, although it travels across the same wires as the Internet, doesn’t use the same addressing system, the only way to prevent access to the data would be to shut down the Internet entirely, Lambert says.
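The chunk-then-encrypt idea can be illustrated with a toy sketch. The chunk size, key derivation, and XOR keystream below are illustrative stand-ins, not MaidSafe’s actual self-encryption scheme:

```python
# Toy "chunk, then encrypt" sketch: split data into fixed-size chunks and
# encrypt each with a keystream derived from a password and the chunk index.
import hashlib

def encrypt_chunks(data: bytes, password: str, chunk_size: int = 16):
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    encrypted = []
    for index, chunk in enumerate(chunks):
        # Per-chunk keystream derived from the password and chunk index.
        keystream = hashlib.pbkdf2_hmac("sha256", password.encode(),
                                        str(index).encode(), 1000, len(chunk))
        encrypted.append(bytes(a ^ b for a, b in zip(chunk, keystream)))
    return encrypted

def decrypt_chunks(chunks, password: str) -> bytes:
    out = b""
    for index, chunk in enumerate(chunks):
        keystream = hashlib.pbkdf2_hmac("sha256", password.encode(),
                                        str(index).encode(), 1000, len(chunk))
        out += bytes(a ^ b for a, b in zip(chunk, keystream))  # XOR is its own inverse
    return out

data = b"Chunked and encrypted before it ever leaves the machine."
stored = encrypt_chunks(data, "storage-password")
print(decrypt_chunks(stored, "storage-password") == data)
# prints: True
```

Each encrypted chunk is meaningless on its own, which is why the chunks can safely be scattered across strangers’ machines.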

The company recently tested the system on a 100-node network running inside a data center, then rolled it out to several hundred alpha users on laptop and desktop computers. These users were given demo apps to run that allow them to create websites and send email; eventually, the company hopes a developer community focused on building out more sophisticated apps will emerge. Lambert says the company is currently developing better support for mobile devices for its next alpha test, and then it will move on to testing network recovery from massive power outages in alpha 3. A more extensive beta test, he says, is on the horizon.

Contests to Solve Big Internet Challenges

The Internet of refrigerators is, of course, fiction. But could an Internet that is this resilient—or nearly so—be a reality? Mozilla and the U.S. National Science Foundation think it’s possible, and aim to accelerate its creation by offering $2 million in prize money to teams that invent it—or at least get close.

“We’ve picked two of the most challenging situations in which people are disconnected from the Internet,” Mozilla program manager Mehan Jayasuriya told me. These are, “Connecting people in the U.S. who don’t have reliable or affordable Internet and connecting people as quickly as possible after a major disaster, when the traditional networks go down.”

Mozilla and the NSF are addressing that first group—an estimated 34 million people—with the “Smart-Community Networks Challenge,” which seeks wireless technology designed to enhance Internet connectivity by building on top of existing infrastructure.

For the second group, there’s the “Off-The-Grid Internet Challenge.” That contest seeks technology that can be quickly deployed after a disaster to allow people to communicate when Internet access is gone.

The teams submit initial designs, and then later, working prototypes. Prizes at the design stage range from $10,000 to $60,000. At the working prototype stage, the stakes range from $50,000 to $400,000, with one of the top awards given for each challenge category.

Judging criteria for both challenges include affordability, feasibility, social impact, and scalability. Off-the-grid technology also has to be portable and have a portable power source. Judging for the smart-community networks challenge will additionally weigh density, range, bandwidth, and security. Potential entrants must submit an intent-to-apply form by 15 October; the whole thing wraps up next August.

“A lot of projects out there address some parts of these problems,” Jayasuriya says. “With $2 million on the table, we are hoping this challenge encourages people to fill their technologies out.”

Were Pied Piper a real company, it would have a decent chance at winning some of that cash. Says Jayasuriya: “It’s the kind of thing we are looking for—a big idea, a crazy idea, an idea about how you piggyback on things that already exist. Pied Piper’s approach is like that, looking at all the phones out there and thinking that these phones have radios, and power, and CPUs, so why wouldn’t you take them and turn them into nodes on a network.”

Browser Fingerprinting Tech

Browser fingerprinting is an online tracking technique commonly used to authenticate users for retail and banking sites and to identify them for targeted advertising. By combing through information available from JavaScript and the Flash plugin, it’s possible for third parties to create a “fingerprint” for any online user.

That fingerprint includes information about users’ browsers and screen settings—such as screen resolution or which fonts they’ve installed—which can then be used to distinguish them from someone else as they peruse the Web.

In the past, though, these techniques worked only if people continued to use the same browser—once they switched, say, to Firefox from Safari, the fingerprint was no longer very useful. Now, a method developed by Yinzhi Cao, a computer scientist at Lehigh University, allows third parties to reliably track users across browsers by incorporating several new features that reveal information about their devices and operating systems.

Cao, along with his colleagues at Lehigh and Washington University, in St. Louis, began creating their tech by first examining the 17 features included in AmIUnique, the popular single-browser fingerprinting system, to see which ones might also work across browsers.

For example, one feature that AmIUnique relies on is screen resolution. Cao found that screen resolution can actually change for users if they adjust their zoom levels, so it’s not a very reliable feature for any kind of fingerprinting. As an alternative, he used a screen’s ratio of width to height because that ratio remains consistent even when someone zooms in.
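The zoom-invariance is easy to check: the resolution a page sees changes with the zoom level, but the reduced width-to-height fraction does not. (The 125-percent-zoom figures below are illustrative.)

```python
# The screen's aspect ratio survives zooming even though the reported
# resolution does not -- the stable feature Cao substituted for resolution.
from fractions import Fraction

def aspect_ratio(width: int, height: int) -> Fraction:
    return Fraction(width, height)   # Fraction reduces to lowest terms

print(aspect_ratio(1920, 1080))  # a 1920x1080 screen at 100% zoom
# prints: 16/9
print(aspect_ratio(1536, 864))   # same screen as reported at 125% zoom
# prints: 16/9
```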

Cao borrowed or adapted four such features from AmIUnique for his own cross-browser technique, and he also came up with several new features that revealed details about users’ hardware or operating systems, which remain consistent no matter which browser they open.

The new features he developed include an examination of a user’s audio stack, graphics card, and CPU. Overall, he relied on a suite of 29 features to create cross-browser fingerprints.

To extract that information from someone’s computer, Cao wrote scripting languages that force a user’s system to perform 36 tasks. The results from these tasks include information about the system, such as the sample rate and channel count in the audio stack. It takes less than a minute for the script to complete all 36 tasks.
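Combining the results of such tasks into a single identifier can be sketched as follows. The feature names and values are hypothetical, and hashing a canonical encoding of the features is a generic fingerprinting technique, not necessarily Cao’s exact construction:

```python
# Minimal sketch of turning a suite of measured, browser-independent
# features into one stable fingerprint string.
import hashlib
import json

def fingerprint(features: dict) -> str:
    """Hash a sorted, canonical JSON encoding of the measured features."""
    canonical = json.dumps(features, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

device = {                      # hypothetical measurements of one machine
    "aspect_ratio": "16:9",
    "audio_sample_rate": 44100,
    "audio_channels": 2,
    "gpu_renderer": "ExampleGPU 9000",
    "cpu_cores": 8,
}
fp_chrome = fingerprint(device)   # the same device measured from Chrome...
fp_firefox = fingerprint(device)  # ...and from Firefox yields the same value
print(fp_chrome == fp_firefox)
# prints: True
```

Because the features describe hardware and OS rather than the browser, the hash comes out identical no matter which browser performed the measurements.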

To test the accuracy of his 29-point method, Cao recruited 1,903 helpers from Amazon Mechanical Turk and Microworkers. He asked them to visit a website from multiple browsers and found that the method worked across many popular browsers, including Google Chrome, Internet Explorer, Safari, Firefox, Microsoft Edge, and Opera, as well as a few obscure ones, such as Maxthon and Coconut.

The only browser that his method didn’t work on was Tor. Earlier this month, Cao published the open source code for his technique so that anyone could use it. His next step? To work on more ways that users can avoid being fingerprinted across browsers, should they wish to opt out.

The Push for a Decentralized Web

In the first episode of the new season (Season 4) of HBO’s “Silicon Valley,” beleaguered entrepreneur Richard Hendricks is asked by eccentric venture capitalist Russ Hanneman what, given unlimited time and resources, he would want to build.

“A new Internet,” says Hendricks.

“Why?” asks Hanneman.

Hendricks babbles about telescopes and the moon landing and calculators and the massive computing power in phones today, and says: “What if we used all those phones to build a massive network?… We use my compression algorithm to make everything small and efficient, to move things around…. If we could do it, we could build a completely decentralized version of our current Internet with no firewalls, no tolls, no government regulation, no spying. Information would be totally free in every sense of the word.”

Hel-lo! Decentralized Internet? That’s a concept I’ve heard bubbling around the tech world for a while now, but not so much in the consciousness of the general public. Is HBO’s “Silicon Valley” about to take the push for a Decentralized Web mainstream? And is what Hendricks talks about on the show really what the Decentralized Web is all about?

I contacted Brewster Kahle, founder of the Internet Archive and the pioneer of the Decentralized Web movement: he first pitched the idea of a Decentralized Web in February 2015, initially describing it as “Locking the Web Open,” at the first meeting hosted by NetGain, a partnership of some of the largest U.S. foundations aimed at strengthening digital society. In August of that year he published a manifesto (he calls it a white paper) making a detailed case for the Decentralized Web, and in June 2016 he hosted a conference to bring key potential players together to move the project forward.

The Decentralized Web, he told me, “would be everywhere and nowhere. There would be no web servers, it would be a peer-to-peer backend, so if any piece of hardware went down, you wouldn’t lose websites. It would be more like the Internet itself is today—if a piece goes down, you can route around the problem. The current Web isn’t like that.

“Today, if you stand in front of a website, you can tell all the traffic going to it. We know that GCHQ, the NSA of the United Kingdom, recorded all the IP addresses going into WikiLeaks.”

This kind of thing, he says, “would be far more difficult in a decentralized world.”

Is that what the fictional Hendricks was talking about? Kahle, who watched the episode, says yes, mostly.

“He says one of the things it would get you is privacy, and it certainly would,” says Kahle. “He also mentioned that it would start to get around firewalls, like the great firewall of China. And it could do that; if someone behind the firewall had read a website, someone else could get it from them.”

Translating the “no tolls” part of Hendricks’ vision into the real world is a little tricky. If by “no tolls,” he was referencing the current debates over net neutrality, that is, whether Internet providers should be allowed to charge content providers for the use of fast lanes, the Decentralized Web would definitely blow those virtual tollbooths out of the road. If, instead, Kahle says, “no tolls” means no paywalls, not so much. Indeed, the vision of the Decentralized Web involves making it easier to pay for content in order to let readers support publishers, instead of just advertisers.

 

A Wireless Network in Afghanistan

In May 2017, Afghan Wireless announced a milestone—the company had launched the first 4G LTE service in Afghanistan. That service is now live in Kabul, and the company plans to extend 4G LTE to the entire country within the next 12 to 18 months.

Building or upgrading a reliable wireless network in a country where road access is limited and power is no guarantee poses a unique set of challenges. The job has been made far more difficult by the U.S.-led war in Afghanistan, which lasted for 13 years from 2001 to 2014, and by ongoing fighting among Taliban insurgents, the Islamic State group, and remaining troops.

On Wednesday, a truck bomb exploded in central Kabul, killing 80 people and wounding hundreds. Such violence reshapes many aspects of daily life for Afghanistan’s 33 million residents. For Afghan Wireless, it also presents major operational hurdles.

Access to spectrum is extremely limited, because the government and military have reserved so much of it for their own use. For its new 4G LTE network, Afghan Wireless plans to operate at 1800 MHz. Mike Hoban, chief technology officer at Afghan Wireless, says the company is looking forward to future spectrum auctions so it can purchase additional licenses for its next upgrade to LTE-Advanced.

Lately, Afghan Wireless towers have become popular targets for insurgents and thieves, requiring the company to post three to six guards at every single tower they install to keep watch around the clock. “We’ve lost a lot of towers,” says Amin Ramin, managing director of Afghan Wireless Communications Company.

The company was founded in 2002, just after the U.S.-led war began. Since then, its reach has grown considerably from the 34 towers it used to launch a nationwide 2G network, which originally served around 50,000 customers.

For one early project, Afghan Wireless wished to link northern Afghanistan to its network in Kabul. To do that, they wanted to install a microwave tower in Salang Pass, a mountain pass that serves as a gateway to northern provinces.

Even once the company has built new roadways, the roads wash out on a regular basis, requiring it to rebuild some roads every year. And Ramin says most of the diesel fuel the company uses to power generators at each tower comes from Iran or Pakistan. Some of that fuel has a very high water content, which destroys the generators.

Still, the company has managed to construct an expansive network of 1,400 towers that serve more than 5 million Afghans across all 34 provinces. Ramin says about 65 of those towers provide backhaul, and in some cases they are separated by as many as 200 kilometers.

As the company upgrades its services to 4G, the network will require even more towers. Ramin says that for towers that directly serve customers, a 2G network could function if the towers were placed 6 to 7 kilometers from each other. But for 4G, the towers may need to be as close as 100 to 300 meters.
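The jump from kilometer-scale 2G spacing to 4G spacing of a few hundred meters implies a steep rise in tower count, because the number of towers needed to cover a fixed area scales with the inverse square of the spacing. A minimal back-of-the-envelope sketch (the 100 km² area and square-grid layout are illustrative assumptions, not figures from Afghan Wireless):

```python
import math

def towers_needed(area_km2: float, spacing_km: float) -> int:
    """Towers required to cover area_km2, assuming a square-grid layout
    where each tower serves a cell whose side equals the spacing."""
    cell_area = spacing_km ** 2
    return math.ceil(area_km2 / cell_area)

area = 100.0  # hypothetical 100 km^2 urban zone

towers_2g = towers_needed(area, 6.5)  # ~6-7 km spacing cited for 2G
towers_4g = towers_needed(area, 0.3)  # ~300 m spacing cited for dense 4G

print(towers_2g)  # 3
print(towers_4g)  # 1112
```

Even with generous assumptions, halving the spacing roughly quadruples the tower count, which is why dense 4G deployments require hundreds of new sites.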

Hoban says this is particularly true in Afghanistan because the walls of homes and buildings are extremely thick, which makes it more difficult for signals to get through. “It’s local brick, it’s local cement, and they’re thick as hell,” he says.

To build out 4G LTE, Ramin says Afghan Wireless will install 150 new towers in Kabul, and as many as 500 across the country. Much of the network is built on equipment from Huawei, a Chinese manufacturer. Today, the company also operates 350 Wi-Fi hotspots for customers in Kabul.

Hoban says Afghan Wireless’ 3G network averages around 35 megabits per second per sector, and he hopes to boost that to 75 megabits per second this year with 4G, and then up to 175 or 200 megabits per second next year by rolling out 4G LTE-Advanced. A typical base station serves three sectors.
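Since a typical base station serves three sectors, the per-sector averages above translate directly into aggregate throughput per station. A quick sketch using the article's figures (the multiplication, not Afghan Wireless data, is all that is added here):

```python
SECTORS_PER_STATION = 3  # typical base station, per the article

def station_throughput_mbps(per_sector_mbps: float) -> float:
    """Aggregate throughput of one base station across all its sectors."""
    return per_sector_mbps * SECTORS_PER_STATION

print(station_throughput_mbps(35))   # current 3G: 105 Mbps per station
print(station_throughput_mbps(75))   # 4G target: 225 Mbps per station
print(station_throughput_mbps(200))  # LTE-Advanced target: 600 Mbps per station
```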

In addition to serving its customers, Afghan Wireless also claims to be the largest private employer in Afghanistan, with more than 8,000 employees across the country. That workforce includes more than 300 women, Ramin says, which makes it the largest private employer of women in Afghanistan. The company provides training and tuition assistance to employees.

The internet is not the opioid crisis

The internet is not the opioid crisis; it is not likely to kill you (unless you’re hit by a distracted driver) or leave you ravaged and destitute. But it requires you to focus intensely, furiously, and constantly on the ephemera that fills a tiny little screen, and experience the traditional graces of existence — your spouse and friends and children, the natural world, good food and great art — in a state of perpetual distraction.

Used within reasonable limits, of course, these devices also offer us new graces. But we are not using them within reasonable limits. They are the masters; we are not. They are built to addict us, as the social psychologist Adam Alter’s new book “Irresistible” points out — and to madden us, distract us, arouse us and deceive us. We primp and perform for them as for a lover; we surrender our privacy to their demands; we wait on tenterhooks for every “like.” The smartphone is in the saddle, and it rides mankind.

Which is why we need a social and political movement — digital temperance, if you will — to take back some control.

No, not like Prohibition. Temperance doesn’t have to mean teetotaling; it can simply mean a culture of restraint that tries to keep a specific product in its place. And the internet, like alcohol, may be an example of a technology that should be sensibly restricted in custom and in law.

Of course it’s too soon to fully know (and indeed we can never fully know) what online life is doing to us. It certainly delivers some social benefits, some intellectual advantages, and contributes an important share to recent economic growth.

But there are also excellent reasons to think that online life breeds narcissism, alienation and depression, that it’s an opiate for the lower classes and an insanity-inducing influence on the politically-engaged, and that it takes more than it gives from creativity and deep thought. Meanwhile the age of the internet has been, thus far, an era of bubbles, stagnation and democratic decay — hardly a golden age whose customs must be left inviolate.

So a digital temperance movement would start by resisting the wiring of everything, and seek to create more spaces in which internet use is illegal, discouraged or taboo. Toughen laws against cellphone use in cars, keep computers out of college lecture halls, put special “phone boxes” in restaurants where patrons would be expected to deposit their devices, confiscate smartphones being used in museums and libraries and cathedrals, create corporate norms that strongly discourage checking email in a meeting.

Then there are the starker steps. Get computers — all of them — out of elementary schools, where there is no good evidence that they improve learning. Let kids learn from books for years before they’re asked to go online for research; let them play in the real before they’re enveloped by the virtual.

Then keep going. The age of consent should be 16, not 13, for Facebook accounts. Kids under 16 shouldn’t be allowed on gaming networks. High school students shouldn’t bring smartphones to school. Kids under 13 shouldn’t have them at all. If you want to buy your child a cellphone, by all means: In the new dispensation, Verizon and Sprint will have some great “voice-only” plans available for minors.

I suspect that versions of these ideas will be embraced within my lifetime by a segment of the upper class and a certain kind of religious family. But the masses will still be addicted, and the technology itself will have evolved to hook and immerse — and alienate and sedate — more completely and efficiently.

Apple Experiencing

Apple is not alone. Other aging tech giants like Microsoft, Amazon and Alphabet, the parent of Google, and younger players like Facebook have also managed to post strong growth despite their tremendous size. The secret to their vigor, according to analysts and investors, is the vast amount of data they have about customers and their ability to sell all sorts of products to those customers.

“This handful of companies is writing the operating system for the new economy,” said Brad Slingerlend, lead portfolio manager of Janus Henderson’s global technology fund. “The bigger companies are both able to collect data and use that data to build into adjacent businesses.”

For Apple, which is far more dependent on hardware sales than other tech leaders, the recent performance is all the more impressive after its dismal 2016, when quarterly revenue fell for the first time in 13 years and the company’s sales in China dropped through the floor.

In the most recent quarter, which ended July 1, Apple actually sold 2 percent more iPhones than it did during the same period last year, defying the usual sales slump that occurs before its new phones are introduced. The business in China stabilized.

The iPad, a product line that was collapsing amid the rise of big-screen smartphones, rebounded, with the number of tablets sold increasing 15 percent as Apple cut prices at the low end and added features at the high end. The company’s redesigned iMacs and MacBook Pros also gained market share in the slowly declining personal computer industry.

In a harbinger of the company’s future, digital services — the App Store, iCloud, movie and music downloads, and the Apple Music streaming service — have become the second most important category for the company, growing 22 percent to $7.3 billion in the quarter. With 1.2 billion iPhones sold and millions of new customers joining the iPhone ecosystem each year, Apple is in a position to increase its income from services much faster than from accessories like the Watch.

“Wall Street is waking up to the reality that the next great product might not be an Apple car or the TV or the Watch,” said Trip Miller of Gullane Capital Partners, which loaded up on Apple shares when they were below $100. “The services business is the next great product.”

Mr. Miller, who also owns stock in Alphabet and Amazon, said that part of what makes these companies so powerful is their strong balance sheets, with seemingly limitless cash and borrowing capacity. That has allowed Alphabet to slowly build YouTube’s advertising business and move into self-driving cars.

Even Facebook, which could sell more ads on its social network than it is willing to put into the news feed, is generating so much cash that it can afford to slowly increase ads on Instagram and its Messenger network while absorbing losses from its Oculus virtual reality business.

Apple’s sales have tended to be more cyclical, with big upgrades to its iPhones typically occurring every two years. The company is a year behind schedule this time, with its last major update in 2014. That has investors excited about the potential for a large spike in demand when the new phones come out this fall.

“Any product they release this year would be successful. There is pent-up replacement demand,” said Amit Daryanani, a hardware analyst with RBC Capital Markets. “Most of us are modeling 11 to 12 percent revenue growth in 2018, a super cycle.”

But he said such growth is unlikely to continue in 2019, when excitement about the new iPhones has faded.

Still, Apple’s financial heft is likely to get it through any rough patches. It had $45 billion in revenue last quarter and is now sitting on $262 billion in cash and marketable securities.


China’s Internet

On Thursday, China tested a new way of shutting down websites and cutting off the country’s internet users from the rest of the world. The censorship drill targeted tools that many in China use to thwart the country’s vast online censorship system, though internet companies said it also hit some sites at random.

One Beijing online video company watched as its app and website went offline for about 20 minutes without warning. The way it was disconnected — the digital tether that connected its service to the rest of the internet was severed — suggested more than a mere technical outage, according to the leader of the firm’s technology team, who requested anonymity for himself and the company for fear of reprisals.

Chinese officials did not comment on the test, and there was no indication that they would use the system again. But if they do, it may not be a total surprise.

China has embarked on an internet campaign that signals a profound shift in the way it thinks about online censorship. For years, the Chinese government appeared content to use methods that kept the majority of people from reading or using material it did not like, such as foreign news outlets, Facebook and Google. For the tech savvy or truly determined, experts say, China often tolerated a bit of wiggle room, leading online users to play a cat-and-mouse game with censors for more than a decade.

Now the authorities are targeting the very tools many people use to vault the Great Firewall. In recent days, Apple has pulled apps that offer access to such tools — called virtual private networks, or VPNs — off its China app store, while Amazon’s Chinese partner warned customers on its cloud computing service against hosting those tools on their sites. Over the past two months a number of the most popular Chinese VPNs have been shut down, while two popular sites hosting foreign television shows and movies were wiped clean.

The shift — which could affect a swath of users from researchers to businesses — suggests that China is increasingly worried about the power of the internet, experts said.

“It does appear the crackdown is becoming more intense, but the internet is also more powerful than it has ever been,” said Emily Parker, author of “Now I Know Who My Comrades Are,” a book about the power of the internet in China, Cuba, and Russia. “Beijing’s crackdown on the internet is commensurate with the power of the internet in China.”

China still has not clamped down to its full ability, the experts said, and in many cases the cat-and-mouse game continues. One day after Apple’s move last week, people on Chinese social media began circulating a way to gain access to those tools that was so easy that even a non-techie could use it. (It involved registering a person’s app store to another country where VPN apps were still available.)

Still, Thursday’s test demonstrates that China wants the ability to change the game in favor of the cat.

A number of Chinese internet service providers said on their social media accounts, websites, or in emails on Thursday that Chinese security officials would test a new way to find the internet addresses of services hosting or using illegal content. Once found, these companies said, the authorities would ask internet service providers to tell their clients to stop. If the clients persisted, they said, the service providers and Chinese officials would cut their connection in a matter of minutes.

The Ministry of Public Security did not respond to a faxed request for comment.

Studies suggest that anywhere from tens of millions to well over a hundred million Chinese people use VPNs and other types of software to get around the Great Firewall. While the blocks on foreign television shows and pornography ward off many people, they often pose only minor challenges to China’s huge population of web-savvy internet users.

China’s president, Xi Jinping, has presided over years of new internet controls, but he has also singled out technology and the internet as critical to China’s future economic development. As cyberspace has become more central to everything that happens in China, government controls have evolved.

It is difficult to gauge the extent of the new efforts, since many users and businesses will not discuss them publicly for fear of getting on the bad side of the Chinese government. But some frequent users said that getting around the restrictions had become increasingly difficult.

One student, who has been studying in the United States and was back in China for summer vacation, said that her local VPN was blocked. She said she had taken the period as a sort of meditation away from social media and left a note on Facebook to warn her friends why she was a “gone girl.”

A doctoral student in environmental engineering at a university in China said it had become harder to do research without Google, though his university had found alternative publications so that students did not always need the internet. He has since found a new way to get around the Great Firewall, the student said, without disclosing what it was.

Close observers of the Chinese internet said some VPNs still work — and that China could still do a lot more to intensify its crackdown.

“We do think that if the government has decided to do so, it could have shut down much more VPN usage right now,” said a spokesman for VPNDada, a website created in 2015 to help Chinese users find VPNs that work.

“If the government had sent more cats, the mice would have a tougher time,” said the spokesman, who declined to be named because of sensitivities around the group’s work in China. “I guess they didn’t do so because they need to give some air for people or businesses to breathe.”

China’s online crackdowns are often cyclical. The current climate is in part the result of the lead-up to a key Chinese Communist Party meeting, the 19th Party Congress this autumn. Five years ago, ahead of a similar meeting, VPNs were hit by then-unprecedented disruptions.

Much like economic policy or foreign affairs, censorship in China is part of a complicated and often imperfect political process. Government ministries feel pressure ahead of the party congress to show they are effective or can step in if a problem appears, analysts said.