
The Pros and Cons of Introducing New gTLDs

By Jonathan Zhang

Every time new concepts are introduced, much debate ensues as to the advantages and disadvantages such a change would bring forth. We’ve seen that happen with the launch of IPv6. Detractors and supporters rallied to make their respective arguments heard.

One thing is sure, though: both parties agree on the need for a much larger IP address space. In the past 10 years alone, the number of Internet users has grown almost fourfold, from 1.7 billion in June 2009 to 4.4 billion as of June 2019. And if one researcher’s calculations are right, as many as 380 websites are created per minute. A growing number of start-ups, each needing to make its own mark on the World Wide Web, are also being established.

Given the constantly rising number of businesses, it isn’t surprising that much-sought-after domains have become harder to come by. Every company, after all, wants a domain that aptly describes its business and matches its brand, so it is easy to find in the ever-growing global community that is the Internet. The seeming lack of domain choices has led to the proposal to widen the top-level domain (TLD) space.

And so in 2015, the Internet Corporation for Assigned Names and Numbers (ICANN) announced the introduction of more than 500 new generic TLDs (gTLDs) to accommodate the growing demand. Of course, this spurred talks about the good and bad that this change would bring about. Let’s take a closer look at both sides of the coin.

The Good

The availability of new gTLDs provides entrepreneurs with more domain name options to choose from. Companies in need of easy-to-remember domains for their websites will no longer be limited to using the more commonly used and likely saturated gTLDs (.com, .net, .org, etc.). With the addition of hundreds of gTLDs to choose from, they would stand a better chance of obtaining ownership rights to a domain that would best fit their brand.

Domainers and domain registrars, whose main task is to provide clients with lists of potential domains for their business, can offer more choices beyond what may be left available in the popular gTLD and even country-code TLD (ccTLD) spaces. This can, of course, result in better customer satisfaction.

The Bad

It’s no secret that everyone approaches anything new with a bit of caution. Because newly created gTLDs are not yet well known, site visitors, especially those who have had run-ins with cybercriminals, may be wary of visiting sites that sport them. It is, after all, well known that cyber attackers often hide their trails by using less popular TLDs.

Cybercriminals and attackers may have gained a bigger playing field as well. Their domain choices, much like the rest of the world’s, have increased. Cybersecurity specialists and law enforcement agencies will need to scour a much bigger space when going after threat actors.

Given the bigger volume of TLDs to monitor, website owners and brand agents would also have to spend more time and exert greater effort to keep tabs on potential cases of copyright infringement and trademark abuse.

Takeaways

Just as connectivity can be considered a double-edged sword, the Internet’s growth presents both risks and opportunities. But because change is constant, anyone with an online presence, whether an individual or a company, needs to remain ever-vigilant to threats in order to stay safe. We can only expect the World Wide Web to expand further, bringing with it both the good and the bad. We just need to be prepared with not only reactive but also proactive measures to maintain the security of our digital assets.

Written by Jonathan Zhang, Founder and CEO of Threat Intelligence Platform




The Promise of Multi-Signer DNSSEC

By Jan Vcelak

DNSSEC is increasingly adopted by organizations to protect DNS data and prevent DNS attacks like DNS spoofing and DNS cache poisoning. At the same time, more DNS deployments are using proprietary DNS features like geo-routing or load balancing, which require special handling to work with DNSSEC.

When these requirements intersect with multiple DNS providers, the system breaks down. DNSSEC cannot currently work across two or more providers if those providers offer proprietary DNS features. In this article, we’ll explain why this happens and present an innovative technical solution that was recently adopted as a working group draft and is under evaluation by the DNS Operations working group in the IETF. We will show how NS1 implements this solution and describe another way that organizations can achieve DNS redundancy with DNSSEC.

The Problem of Multi-Signer DNSSEC

DNSSEC is a set of extensions that improve the original DNS protocol to make it more secure. Its main objective is to allow DNS clients to verify that they are receiving correct DNS information and not fake information injected by attackers.

DNSSEC defines new types of DNS records that hold cryptographic signatures of DNS data and publish the public key that allows verification of that data. The signatures prove the data is authentic and has not been tampered with, because the private key used to create the signatures is held only by the DNS zone owner.
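
To make validation concrete, here is a minimal sketch using the Python dnspython library (our choice for illustration; the article itself prescribes no tooling). It fetches a zone’s DNSKEY record set together with the RRSIG covering it and verifies the self-signature; the resolver address is an assumption:

import dns.dnssec
import dns.message
import dns.name
import dns.query
import dns.rdatatype

zone = dns.name.from_text("example.com")

# Ask for the DNSKEY RRset with the DNSSEC OK bit set so the RRSIG comes back too.
query = dns.message.make_query(zone, dns.rdatatype.DNSKEY, want_dnssec=True)
response = dns.query.udp(query, "8.8.8.8", timeout=5)  # assumed resolver
# (A production validator would retry over TCP if the response was truncated.)

# The answer section holds the DNSKEY RRset and the RRSIG RRset covering it.
rrsets = {rrset.rdtype: rrset for rrset in response.answer}
dnskey = rrsets[dns.rdatatype.DNSKEY]
rrsig = rrsets[dns.rdatatype.RRSIG]

# Raises dns.dnssec.ValidationFailure if the signature does not verify.
dns.dnssec.validate(dnskey, rrsig, {zone: dnskey})
print("DNSKEY RRset signature verified")

If validation succeeds, the resolver can trust the keys in the set and use them to check signatures on the rest of the zone’s records.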

The problem begins when organizations have three requirements, all of which are quite common in modern DNS deployments:

  1. DNSSEC – they want to secure DNS communication using the DNSSEC protocol.
  2. Multi provider – they want to run DNS with more than one provider at the same time. This is commonly used to set up redundant DNS, ensuring services remain available even if one DNS provider fails.
  3. Advanced and proprietary DNS features – most DNS providers today offer capabilities that go beyond the standard DNS protocol in order to route traffic based on rules or conditions, such as geo-routing, which directs users, via DNS, to a server near them, or Global Server Load Balancing, which routes users between several servers based on resource availability. See, for example, NS1’s DNS traffic steering capabilities. Because these capabilities extend standard DNS, many of them are implemented in proprietary ways.

Using current DNS infrastructure, if you meet requirements #2 and #3, DNSSEC will simply not work. Let’s understand why.

In traditional DNS, all records are static. The zone file is signed with DNSSEC and distributed to your DNS providers (if you use more than one). All providers serve the records from the same file. Every client who sends a query for a record gets the same answer, regardless of which DNS provider that client is communicating with.

However, when we introduce requirement #3, proprietary DNS features, DNS records are no longer static. The DNS answer might change for a specific query. For example, you might want to provide a different DNS response depending on the geographical location of the user, the server you want to route the user to, performance considerations, etc.

Each DNS provider that has proprietary DNS features has an internal method for making DNSSEC work with their traffic management features. For example, NS1 signs each individual response on-the-fly when generating the response (this is called DNSSEC online signing).
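
NS1’s actual signing code is not shown in this article; the following is only a conceptual sketch of what online signing amounts to, written with Python’s cryptography package. The two helper functions are hypothetical stand-ins for answer selection and RFC 4034 canonical encoding:

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# The zone's private ZSK; ECDSA P-256 corresponds to DNSSEC algorithm 13.
zsk = ec.generate_private_key(ec.SECP256R1())

def compute_tailored_answer(qname):
    # Hypothetical stand-in for geo/load-aware answer selection.
    return f"{qname} 60 IN A 192.0.2.10"

def canonical_wire_form(rrset):
    # Hypothetical stand-in for RFC 4034 canonical wire encoding.
    return rrset.encode()

def answer_query(qname):
    # Instead of serving a pre-computed signature from a signed zone file,
    # mint the RRSIG for this specific, possibly per-user answer right now.
    rrset = compute_tailored_answer(qname)
    signature = zsk.sign(canonical_wire_form(rrset), ec.ECDSA(hashes.SHA256()))
    return rrset, signature

print(answer_query("www.example.com."))

The trade-off is runtime cost: the server performs one signing operation per unique response instead of one per record set per zone update.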

Those proprietary DNSSEC implementations are quite different between providers. It is no longer possible to provide one zone file, sign it one time and distribute it between providers. Each provider generates tailored DNS responses which cannot be easily pre-signed with a single DNSSEC key.

A Strategy for Solving the Multi-Signer DNSSEC Problem

A solution to this problem has been proposed in a recent IETF draft, co-authored by NS1’s Jan Včelák. The solution is straightforward but requires some background to understand. Let’s go through it step by step.

A Bit of Background: KSK and ZSK

Let’s start by defining two important concepts:

  • The Key Signing Key (KSK) is the key used to sign, and thereby authenticate, the other DNSSEC keys that sign the zone content. The private part of the key is kept by the zone owner, and the public part is published in the DNS. The key is also referenced from the parent zone, which establishes a secure delegation between the parent and the zone.
  • The Zone Signing Key (ZSK) is the key used to sign all records in the zone, except for the DNSKEY record, which is signed by the KSK. An illustrative DNSKEY record set follows below.
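
In a zone, the two key types are distinguished by the flags field of the DNSKEY record: 257 marks a KSK (the Secure Entry Point bit is set) and 256 marks a ZSK. The key material below is a placeholder, not real key data:

example.com. 3600 IN DNSKEY 257 3 13 <base64 KSK public key>
example.com. 3600 IN DNSKEY 256 3 13 <base64 ZSK public key>

The third field (3) is the fixed DNSSEC protocol value, and 13 identifies the signing algorithm (ECDSAP256SHA256).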

Sharing the ZSK Between Providers

The proposed strategy for multi-signer DNSSEC is that each DNS provider should use a separate zone signing key for the records it serves, but all providers have to agree on the total set of DNSSEC keys being used, which includes all of the KSKs and ZSKs. Therefore, each provider has to import the public keys of every other provider.

Why would one DNS provider need the public keys of the other providers?

Take a domain, example.com, with two DNS providers A and B and with each provider using a separate KSK and ZSK. There is a secure delegation from the parent zone (“.com”), which contains signed DS records pointing to both providers’ KSK.

Now the DNS resolver has to fetch the DNSKEY record set for the zone, which contains the DNSSEC keys to be used for validation. If it chooses to talk to provider A, the resolver obtains the DNSKEY record set, validates the response, and then caches it.

At a later point in time, the resolver might query another record in that zone, but now it talks to provider B’s name servers. It gets a response, but that response is signed by B’s ZSK, which is not present in the cached DNSKEY record set received from A, so validation fails.

That’s why provider A’s DNS response needs to include the ZSK for provider B, and vice versa. Every provider has to import public keys of every other provider. This is the basis for the multi-signer DNSSEC solution.
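
A minimal sketch of that key-import step, treating public keys as opaque strings (real deployments exchange DNSKEY rdata, typically over a provider API):

# Every provider must end up serving the union of all providers' public keys.
def merged_dnskey_set(own_keys, peer_keys):
    # Sorted union, so every provider builds an identical DNSKEY record set.
    return sorted(set(own_keys) | set(peer_keys))

provider_a_zsks = ["256 3 13 keyA..."]  # placeholder key material
provider_b_zsks = ["256 3 13 keyB..."]

# A serves B's ZSK and vice versa; both answer with the same set.
assert (merged_dnskey_set(provider_a_zsks, provider_b_zsks)
        == merged_dnskey_set(provider_b_zsks, provider_a_zsks))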

Two Models for Making Multi-Signer DNSSEC Work

We’ve presented the basic principle that makes multi-signer DNSSEC work — that each provider needs to import and provide to its users the ZSKs of all the other providers. This ensures that the next time a user makes a query, they can still validate their DNSSEC data even if they reach another provider. There are two models for making this happen.

Model 1: One Zone Owner and One KSK

Who is it for?

Model 1 uses a single KSK managed by one of the providers or by the zone owner. This model is suitable for organizations that require tighter control of the KSK and want to manage all signing keys for the zone themselves.

How it works

Each of the providers, A and B, has its own set of zone signing keys (ZSK). The zone owner retrieves the public keys from the providers, builds the DNSKEY record set which contains the public KSK and public ZSKs of the providers, signs it using the private KSK, and provides the resulting DNSKEY record set along with the signature to the two DNS providers.

Source: DNS OARC Presentation

The above diagram illustrates that the DNSKEY record set is always served with the same signature, generated in advance by the zone owner. Any other content in the zone is signed by the ZSKs held by the different providers.

Because each DNS provider serves the same DNSKEY record set, even if the resolver caches a response from one provider, it has all the public keys needed to validate responses sent by the other provider.
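
Here is a conceptual sketch of the Model 1 flow. The key material and signing callback are placeholders, not a real DNSSEC encoder:

# Model 1: one KSK, held by the zone owner, signs a DNSKEY record set
# containing every provider's public ZSK.
def build_signed_dnskey(ksk_public, provider_zsks, sign_with_private_ksk):
    rrset = [ksk_public] + sorted(provider_zsks)
    rrsig = sign_with_private_ksk("".join(rrset))
    return rrset, rrsig

zsks = ["zskA...", "zskB..."]  # public ZSKs fetched from providers A and B
rrset, rrsig = build_signed_dnskey("ksk...", zsks, lambda data: f"sig({data})")

# The owner pushes the identical (rrset, rrsig) pair to both providers, so a
# resolver that cached the DNSKEY answer from A can validate answers from B.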

Model 2: Shared Trust, Two KSKs Distributed to Two DNS Providers

Who is it for?

Under Model 2, each provider uses an independent KSK and ZSK. This model is suitable for organizations that do not require tight control of the KSK and instead want a solution with full redundancy.

How it works

Each provider has its own ZSK and KSK. Each independently reaches out to the other provider, gets the public keys that provider is using, and adds its own public keys. As a result, both end up with the same DNSKEY record set, which each signs with its own KSK. The DNSKEY record set and the signatures are then added to the zone.

In this setup, the parent zone contains a DS record referring to the KSK of each provider. No matter which provider the DNS resolver selects to get any zone record, it will always be able to validate the record’s authenticity, because both KSKs are trusted and the DNSKEY record set is the same at both providers.
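
The Model 2 flow can be sketched the same way (again with placeholder keys and stub signing callbacks):

# Model 2: each provider builds the same union of public keys but signs
# the DNSKEY record set with its own KSK.
def provider_dnskey(own_keys, peer_keys, sign_with_own_ksk):
    rrset = sorted(set(own_keys) | set(peer_keys))
    return rrset, sign_with_own_ksk("".join(rrset))

a_keys = ["kskA...", "zskA..."]  # placeholder key material
b_keys = ["kskB...", "zskB..."]

rrset_a, sig_a = provider_dnskey(a_keys, b_keys, lambda d: f"sigA({d})")
rrset_b, sig_b = provider_dnskey(b_keys, a_keys, lambda d: f"sigB({d})")

# Same record set at both providers; different signatures, but each is
# validated by a KSK the parent zone already trusts via a DS record.
assert rrset_a == rrset_b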

Multi-Signer DNSSEC Status at NS1

At this stage, NS1 has a working prototype implementation of the interface required to support Model 1: our REST API enables retrieval of the public keys we use for signing and also allows publishing the final DNSKEY record set and its signatures. At the same time, we are building an open-source component that allows you to run NS1 and any common open-source DNS server (for example, BIND) in the multi-signer DNSSEC configuration.

NS1 is currently working with other DNS providers to implement the same interface, which will eventually also enable running Model 2, with its benefit of full DNS provider redundancy.

While we are talking to different providers to enable Model 2, you can achieve the same results today using only the NS1 Domain Security Suite.

Domain Security Suite

NS1 Domain Security Suite Includes:

  • A fully managed, single tenant, globally anycasted DNS network dedicated to your zones
  • A second, redundant DNS network hosted with a third party vendor on hardware, IPs, and ASNs that are physically and logically separate from the NS1 Managed DNS network
  • Support for full traffic management and DNSSEC on both networks
  • Full use of NS1’s suite of advanced traffic steering capabilities on both DNSSEC-protected DNS networks
  • Single pane of glass management

Written by Jan Vcelak, Lead Software Engineer at NS1




Reliance Jio Plans Major NB-IoT Launch in India in Jan. 2020

By Dan Jones

The operator plans to have at least a billion IoT devices operating on the network within two years.


The 2019 IPv4 Market: Mid-Year Report

After a slow start to 2019, the volume of IPv4 numbers traded is picking up — though still far below the peak trading periods of 2018. By this same time last year, the total quantity of numbers flowing to and from organizations in the ARIN region was just over 27 million. But 2018 was the most active year ever in the IPv4 market. This year is not shaping up to be as active. In 2019 (through July), just over 17.5 million numbers have transferred — representing a 35% decline from last year over the same time period.

The high volumes in 2018 were the result of an increased supply of large blocks entering the market. Between 2017 and 2018, the number of transactions doubled, and the volume of IPv4 addresses sold in the large block market grew by more than 15%, most of it in Q3 2018, when the second highest quantity of IPv4 numbers ever traded in a quarter changed hands. The two quarters that followed, however, were the quietest in the history of the market, as a result of limited supply rather than constrained demand. There were no large block transfers during this period.

The large block scarcity in Q4 2018 and Q1 2019 pushed prices up considerably. These rising prices shook loose some additional large block supply and produced a handful of large block transactions in Q2 of this year.

Although the volume of numbers traded has declined from last year, the total number of transactions is still trending upward, as it has year after year. This upward trend is attributable to continuing growth in small block transactions. In the first two quarters of 2019, over 75% of transactions involved trades of fewer than 4,000 IPv4 numbers, a 6% increase compared to the first half of 2018.

To date, the 2019 inter-RIR market has had no large block transactions, but there has been a steady stream of small and medium block trades. There has also been big news in the international market: LACNIC recently ratified a policy that will permit inter-RIR transactions, and an inter-RIR transfer policy proposal is under consideration in AFRINIC.

Market Consolidation for /17+ Blocks

The current IPv4 market for /17 and larger blocks is consolidating around the trading activity of just a few buyers. In 2016, for example, there were approximately 30 buyers of the nearly 80 /16 blocks traded; 95% of those blocks were sold outside of large block transactions to small and mid-block buyers (i.e., buyers purchasing fewer than 1 million numbers). Since then, the number of /16 blocks entering the market has increased – in the first half of 2019, over 100 /16s were sold – but the number of buyers and the percentage of blocks traded outside of large block transactions have declined substantially. There were only 9 buyers altogether, and less than 10% of the blocks went to buyers picking up fewer than three /16s.
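
For readers less familiar with CIDR notation, block sizes scale by powers of two: a /N block contains 2^(32 - N) IPv4 addresses. A quick Python check of the sizes discussed above:

# A /N IPv4 block contains 2**(32 - N) addresses.
for prefix in (16, 17, 20):
    print(f"/{prefix}: {2 ** (32 - prefix):,} addresses")
# Output: /16: 65,536  /17: 32,768  /20: 4,096 (roughly the small-block threshold)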

This same consolidation trend pervades the entire market for /17 and larger blocks. Since 2016, seller diversity (measured as the total number of sellers compared to the total number of transactions) has remained high as sellers continue to stream into the market. Buyer diversity, however, has steadily decreased. See Table 1.

Block Prices Continue to Increase

Demand for address space remains high, and supplies are constrained. These factors are exerting upward pricing pressure. But at the same time, sophisticated buyers are looking for ways to use their leverage to relieve that pressure. In this climate, sellers need real-time pricing intelligence, effective bid processes, and experienced transaction guidance to help ensure they are closing deals that maximize the value of their address space.

IPv6 Deployment Picking Up … A Bit … in 2019

Worldwide end-user adoption hit an all-time high of nearly 29% in June 2019, according to Google IPv6 statistics (https://www.google.com/intl/en/ipv6/statistics.html). This represented a nearly 3 percentage point increase since January, slightly better than the rate of progress made during the same time period last year and in line with global adoption rates in prior years.

By the end of Q2 2019, global user connectivity ranged from 25% (on weekdays) to around 29% (on weekends). Over the last two months of Q2, there was some upward progress in the U.S., but the U.S. adoption rate remains a few percentage points shy of its late-2018 peak, when adoption hit 40%.

There continues to be little progress in the number of websites reachable over IPv6. According to Alexa Top 1000 statistics, at the end of July, 25% of websites were reachable over v6, which reflects no improvement over the last two years.

As in the past, there is no evidence that IPv6 is replacing IPv4 as the dominant protocol for Internet routing or that the migration to IPv6 has had any material impact on the IPv4 market. Based on the current status of IPv6 adoption, we expect nothing to change in this regard for the remainder of 2019.


‘Smart’ Ovens May Turn On and Preheat Themselves Overnight, Which Is Totally Safe

By Joel Hruska

The June Oven is part of a new wave of kitchen gadgets promising to combine modern Silicon Valley technology with cutting-edge design. On paper, these products deliver a new era of efficient, simple device interaction. In reality, they often come with fine print attached. In the June’s case, the fine print may involve a tendency to turn on and preheat itself overnight.

Multiple June owners have complained about this happening to them while they were sleeping.

June has a problem here, whether the company wants to acknowledge it or not. Obviously it matters if the company’s oven has a flaw causing it to activate and preheat without anyone ordering it to do so. But it matters just as much if customers are triggering this action without realizing it. Unattended cooking accounts for a significant percentage of total house fires.

Smart Products Have a Knowledge Problem

Up until now, an oven has been an appliance that you started while you were standing in front of it. While it’s always been a good idea to keep flammable things away from an oven, every single one of us has, at one time or another, left something flammable near a stove. You’ve probably done so deliberately, especially if you’ve ever been dealing with a sudden rush of company or were short on counter space for food prep. The rule for managing the risk of an oven fire is to check if the oven is on before putting flammable things near it.

An oven that can turn itself on remotely is a different risk than an oven that can’t. There are many steps that June can take (and possibly has taken) to reduce the potential threat, including building a good oven that isn’t overly prone to external hot spots. At the same time, however, it’s an oven — it’s going to have hot spots by definition. A human standing in front of the oven would automatically clear the area of any debris that might have built up around it. The oven does not “know” that it needs to perform this function. And people can die when computers make mistakes about what they know. Autonomous vehicles drive into stationary objects. Aircraft fly themselves into the ground, resisting every effort their pilots make to pull their noses skyward.

One important distinction between the various autonomous vehicle problems or the 737 Max’s MCAS system and the June Oven, of course, is that the oven may not be doing this because of some baked-in AI capability. But this is less important than it might seem. What Matt Van Horn calls “user error,” I would call something else: bad app design. And since June develops both its app and its oven, the responsibility for the issue lands in the same place.

If the problem is that end-users are mistakenly triggering the “Preheat” function in the app, the app needs to be designed in a manner that makes it much more difficult to tell the oven to preheat without being aware of doing so. It should not be possible to accidentally turn on the oven while looking through the app’s recipe book. June will distribute an app update in September that allows consumers to disable the remote preheat functionality, but allowing it will still be the default. Next year, the June Oven will be updated to recognize whether there is food in the device and will turn off after a set period of time if the end-user does not flag the oven to stay on.

The point in comparing the June Oven situation to the situation with autonomous cars or the 737 Max is not to pretend they are equivalent. It’s to highlight how integrating new capabilities into products requires manufacturers to think about how humans use them. A product that has the capability to upend common assumptions about how an appliance works needs to take particular care to guard against any risk of harm the change creates. Adding a little intelligence to a washer or dryer doesn’t increase the risk of harm, but anything that generates enough heat to potentially start a fire needs to be treated with care. The June’s growing pains are a small example of how companies and consumers are both going to need to adjust how they think about products if they want to change the ‘defaults’ people are used to living with.

The June doesn’t appear to be a very well-rated product in the first place — it’s a $600 toaster oven and the Wirecutter found its cooking subpar in comparison with the Cuisinart TOB-260N1. As added bonuses, the Cuisinart lacks Wi-Fi, has no integrated camera, and doesn’t appear to offer a recipe app that costs ~$50 per year to subscribe to.


