Startup Aims to Predict Fires Before They Start

By George Leopold

A predictive analytics startup armed with a patented learning algorithm aimed at security applications along with Internet of Things devices said it has attracted seed funding for a platform that could spot the precursors of impending fires and floods before they start.

OneEvent Technologies said this week it has so far raised $4.3 million to commercialize its predictive learning and analytics engine for building monitoring and security. The cloud-based platform—the IoT version of a smoke alarm—uses wireless sensors to measure factors such as temperature, air quality and humidity. The engine eventually learns what is “normal” for a given structure and issues alerts when it detects an abnormal reading that might indicate fire or flood.
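
The "learn what is normal, flag what is abnormal" idea can be sketched in a few lines. The following is an illustrative rolling-baseline detector, not OneEvent's patented algorithm; the window and threshold values are assumptions:

```python
from collections import deque

class SensorBaseline:
    """Rolling-baseline anomaly detector: learn what is typical for one
    sensor, then flag readings that deviate sharply from that baseline."""

    def __init__(self, window=288, threshold=3.0):
        self.readings = deque(maxlen=window)  # e.g. 24 hours of 5-minute samples
        self.threshold = threshold            # alert at this many std deviations

    def observe(self, value):
        """Record a reading; return True if it looks abnormal."""
        abnormal = False
        if len(self.readings) >= 30:  # need some history before judging
            mean = sum(self.readings) / len(self.readings)
            var = sum((x - mean) ** 2 for x in self.readings) / len(self.readings)
            std = var ** 0.5 or 1e-9  # avoid dividing by zero on flat data
            abnormal = abs(value - mean) / std > self.threshold
        self.readings.append(value)
        return abnormal

kitchen_temp = SensorBaseline()
for _ in range(100):
    kitchen_temp.observe(21.0)         # a steady, "normal" room temperature
print(kitchen_temp.observe(85.0))      # sudden spike: prints True
```

A real product would weigh multiple sensors, seasonality, and sensor noise against each other; the point here is only the learn-then-alert loop.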

Company founders Dan Parent and Kurt Wedig said they were inspired by a TV segment showing hotel occupants crawling down a smoke-filled hallway in search of an exit. The idea was spurred by the realization that smoke detectors and fire alarms do little to prevent fires.

Founded in 2014, the startup based in Mount Horeb, Wis., holds eight U.S. patents on its software platform. The startup is currently testing the predictive alarm system with local fire departments and other agencies using controlled burns to determine how far in advance the OnePrevent system can predict trouble.

OneEvent said that during testing at the safety certifier UL (formerly Underwriters Laboratories), its system detected signs of a fire up to 20 minutes before smoke alarms sounded.

The predictive learning and analytics engine can, for example, be trained to detect rising temperatures in a kitchen or increasing moisture from a leaking pipe. Each data point collected by wireless sensors can be processed via the OneEvent algorithm, alerting a building manager or homeowner via a smart app on a mobile phone or tablet. “As opportunity in IoT and building monitoring grows, there’s a potential to create solutions that can do more than just alert people to danger as it happens or after the fact,” OneEvent CEO Wedig asserted.

The startup notes that its predictive-alert system is neither a fire nor burglar alarm. Rather, it is positioning the platform as “supplementary protection that empowers users with data and anticipated warnings via a cloud based platform and app.”

Along with first responders and homeowners, the analytics engine is also being pitched to property and casualty insurers, allowing them to “look back in time” to determine whether an insured property was protected by working sensors.

Along with predictive capabilities, embedded sensors could also be used by first responders to track the progress of a fire, generating data for investigators and claims adjusters on its cause.

Recent items:

Can the Internet of Things Help Us Avoid Disasters?

Big Algorithms to Change the World

The post Startup Aims to Predict Fires Before They Start appeared first on Datanami.

Read more here:: www.datanami.com/feed/


The Three Pillars of ICANN’s Technical Engagement Strategy

ICANN’s technical engagement team was established two years ago. Since then, we have made a great deal of progress in engaging with our peers, both throughout the Internet Assigned Numbers Authority (IANA) stewardship transition proposal process and currently during the implementation phase. Over the past few months, the Office of the CTO has been reinforced with a dedicated research team composed of experienced Internet technologists. These experts are working hard to raise the level of ICANN’s engagement in measuring the usage and evolution of Internet identifier technologies, and they are collecting and sharing data that can further support the community in its policy development processes. They are also focusing on helping to build bridges with other relevant technical partners.

Our overall strategy for technical engagement is based on three pillars:

  • Continue building trust with our technical partners and peers within the ecosystem.
  • Expand our participation in relevant forums and events where we can further raise awareness about ICANN’s mission, while encouraging more diversity in participation in our community policy development processes.
  • Continue contributing ICANN’s positions on technical topics discussed outside our regular forums, but ones affecting our mission, keeping the focus on our shared responsibilities and effective coordination.

In this blog, we highlight some ongoing activities toward each goal:

Expanding Participation in Technical Forums

To continue building a sustainable relationship with our peers, we have increased, in number and in quality, our participation and contribution to various technical forums led by our partner organizations, including:

  • Internet Engineering Task Force (IETF)
  • Regional Internet Registries (RIRs): African Network Information Center (AFRINIC), Asia-Pacific Network Information Centre (APNIC), American Registry for Internet Numbers (ARIN), Latin American and Caribbean Network Information Centre (LACNIC) and Réseaux IP Européens Network Coordination Centre (RIPE NCC)
  • Regional country code top-level domain organizations: African TLD Organization (AFTLD), Council of European National TLD Registries (CENTR), Asia Pacific TLD Organization (APTLD), Latin American and Caribbean TLD Organization (LACTLD)
  • And many others …

Encouraging Diversity of Participants

As a community, we face the challenge of strengthening the bottom-up, multistakeholder policy development process, while at the same time ensuring that participation becomes more diverse. Looking beyond regional and gender diversity, we must also achieve technical diversity. For example, when we work on domain name policies that affect online services, how do we ensure that we have Internet service operators, application developers and software designers around the table to give their operational perspectives? And as mobile technology becomes an increasingly prevalent way of consuming Internet services, and mobile operators are important players in that sector, how do we ensure that they engage with and contribute to our policy development processes?

We have also seen a growing interest from the Internet services abuse mitigation community in understanding and engaging more actively in our community-led policy development processes. As a result, the output of these processes is taking their needs into consideration. Our Security, Stability and Resiliency (SSR) and Global Stakeholder Engagement (GSE) teams have worked together to provide capability-building programs dedicated to this community. We are exploring ways to cover more ground (particularly in emerging regions). Our recent participation in the Governmental Advisory Committee (GAC) Public Safety Working Group’s workshop in Nairobi has confirmed this need. A follow-up mechanism is under discussion to make sure our engagement efforts meet these needs.

Engaging in Technical Topics that Affect Our Ecosystem

Finally, within our technical scope, we have launched an Internet Protocol version 6 (IPv6) initiative to refine ICANN’s position on IPv6. The initiative defines actions that will ensure that, as an organization, we do our part to provide online services that our community can transparently access over both IPv6 and Internet Protocol version 4 (IPv4). Read more about our IPv6 initiative.
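
The spirit of that initiative, every service reachable over both protocols, can be illustrated with a small offline sketch using Python's standard `ipaddress` module; the endpoint inventory below is hypothetical:

```python
import ipaddress

def dual_stack_gaps(services):
    """Given {service: [address strings]}, return services that publish
    no IPv6 endpoint, i.e. are unreachable for IPv6-only clients."""
    gaps = []
    for name, addrs in services.items():
        families = {ipaddress.ip_address(a).version for a in addrs}
        if 6 not in families:
            gaps.append(name)
    return sorted(gaps)

# Hypothetical endpoint inventory, using documentation address ranges.
inventory = {
    "www":  ["192.0.2.10", "2001:db8::10"],   # dual stack
    "mail": ["192.0.2.25"],                   # IPv4 only
    "dns":  ["2001:db8::53", "192.0.2.53"],   # dual stack
}
print(dual_stack_gaps(inventory))  # prints ['mail']
```

A production audit would resolve live DNS records rather than read a static inventory, but the classification step is the same.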

Read more here:: www.icann.org/news/blog.rss

Homeland Security Invests $1M in Five IoT Security Startups

The Department of Homeland Security (DHS) Science and Technology Directorate (S&T) announced its $1M investment in five IoT security startups: Factom, Whitescope, M2Mi, Ionic Security, and Pulzze Systems.

DHS aims to improve situational awareness of security within the Internet of Things by funding these startups. The announcement was made on Jan 21, 2017.

The five IoT security startups, selected through DHS’s ‘Securing the Internet of Things’ Silicon Valley Innovation Program, will each produce and demonstrate a pilot-ready prototype to qualify for the third phase of the program.

The major focus of each funded startup is as follows:

Atlanta-based Ionic Security received approximately $200K to develop a distributed data protection model that will solve authentication, detection, and confidentiality challenges affecting distributed IoT devices. Ionic’s total equity funding stands at $122.44M across 7 rounds from 22 investors; Amazon participated in Ionic’s $45M Series D round.


Austin-based Factom received $199K from DHS to deliver blockchain-based solutions for quality control, due diligence, and auditing, helping to prevent spoofing and ensure data integrity. The startup has also secured $6.49M across 5 rounds from 4 investors.

California-based M2Mi received $200K to deploy an open-source version of the SPECK cryptographic protocol, enabling a lightweight crypto package to run on IoT devices.

Another California-based startup Whitescope LLC received $200K to build a working prototype of a secure wireless communications gateway for IoT devices.

California-based Pulzze Systems, which also received $200K from DHS, will improve infrastructure visibility by providing dynamic detection as components connect to or disconnect from a networked system.

Read more here:: feeds.feedburner.com/iot

MapR Extends Its Platform to the Edge

By Alex Woodie

MapR Technologies today unveiled MapR Edge, an extension of its converged data platform that lets customers install MapR nodes practically anywhere they want.

The new offering runs on small portable PCs like the Intel NUC and delivers the full breadth of MapR’s capabilities, including Hadoop, NoSQL, and data streaming functionality, anywhere customers want them, from autonomous cars on rural highways to wellheads in the oil field.

“Things are getting more distributed, not less distributed,” says Jack Norris, MapR‘s senior vice president of data and applications. “The benefits of having processing closer and closer to the data and being able to act faster where the action is happening, is a big driver.”

MapR Edge pushes data collection and processing capabilities further away from the big centralized clusters that have so far largely defined big data platforms like Hadoop, NoSQL databases, and streaming data platforms such as Kafka. But instead of creating a separate system that must be configured and managed, MapR decided to make it all part of the family.

“This is not a separate standalone product that just has data collection,” Norris tells Datanami. “It’s actually a full extension of the cluster, so [it’s providing] centralized management, centralized security. [It has] the ability to replicate, the ability [to] mirror, the ability to handle occasional[ly] connected devices with streams. It’s all built into the MapR Edge.”

The new offering fits into MapR’s strategy to help customers build Internet of Things (IoT) applications. To that end, it serves several functions.

First, it serves as the first waypoint for data right after it’s generated. As raw data flows off wellheads or MRI machines, MapR Edge collects it and performs the first round of processing. The customer can then choose to upload only the aggregated results to the core MapR clusters for further analysis or archiving, which can help alleviate bandwidth constraints as well as data privacy and security concerns.

But MapR Edge goes beyond that and pushes machine intelligence out into the field. For example, an oil exploration company with thousands of wellheads may have used machine learning algorithms to predict when equipment is about to fail. That signature of equipment failure can be pushed out to the MapR Edge to score streams of live data in real time.
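
A minimal sketch of that pattern: a failure “signature” trained centrally is shipped to an edge node, which scores live readings against it. All field names, values, and thresholds below are invented for illustration and are not MapR APIs:

```python
# "Learn globally, act locally": score live readings against a
# centrally trained failure signature on the edge node.

# Signature shipped from the core cluster: {field: (pre-failure value, weight)}
failure_signature = {
    "vibration_hz": (120.0, 0.02),
    "pressure_psi": (310.0, 0.01),
}

def score(signature, reading):
    """Weighted distance between a live reading and the failure signature;
    smaller means the equipment looks more like it is about to fail."""
    return sum(w * abs(reading[k] - v) for k, (v, w) in signature.items())

def alert(reading, threshold=1.0):
    """Flag readings that fall close to the failure signature."""
    return score(failure_signature, reading) < threshold

print(alert({"vibration_hz": 118.0, "pressure_psi": 309.0}))  # near signature: True
print(alert({"vibration_hz": 60.0,  "pressure_psi": 250.0}))  # healthy: False
```

In MapR's architecture this scoring would run inside a stream consumer on the edge cluster, with the signature periodically refreshed from the core.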

“This whole concept of act locally, learn globally is really what’s driving some of the closed loop processes,” Norris says. “Each individual unit is only seeing the data from that particular wellhead. But when you’ve got thousands of those throughout world and you have data that’s been collected over a period of time, the ability to detect infrequently occurring events — the ability to detect anomalies – is much better understood on global basis.”

As a full-fledged member of the MapR cluster, MapR Edge can run any big data processing engine supported by the Hadoop distributor, including Spark, Drill, Hive, and MapReduce. The software can also function as a node of MapR’s NoSQL database, MapR-DB, or of MapR’s Kafka-compatible stream processing system, MapR Streams.

MapR Edge can run on the Intel NUC, a miniature PC measuring only 4.5 by 4.5 inches. The minimum configuration calls for a cluster of three Intel NUCs, each with 16GB of RAM and 64GB of solid-state storage. The maximum configuration is a cluster of five MapR Edge nodes and a total of 50 TB of storage.

Related Items:

MapR Embraces Microservices in Big Data Platform

MapR Delivers Bi-Directional Replication with Distro Refresh

The post MapR Extends Its Platform to the Edge appeared first on Datanami.

Read more here:: www.datanami.com/feed/


A Conversation with Community Leader Lise Fuhr

Lise Fuhr is a leader in the Internet community in Denmark. Here, she reflects on what ICANN58 means for Denmark and on the key issues she will focus on at the meeting.

Tell us a little about yourself and your involvement in ICANN.

I’m currently Director General at the European Telecommunications Network Operators’ Association (ETNO), the association that includes Europe’s leading providers of telecommunication and digital services. In ICANN, ETNO is active in the Internet Service Providers and Connectivity Providers (ISPCP) constituency and the Business Constituency (BC).

I’ve had several roles in the ICANN community, as a member of the Second Accountability and Transparency Review Team (ATRT2) and as co-chair of the Cross-Community Working Group that developed the proposal for the Internet Assigned Numbers Authority (IANA) stewardship transition. At present, I am a Board member of the ICANN affiliate Public Technical Identifiers (PTI), which is responsible for the operation of the IANA functions.

In the past, I was COO of Danish registry DIFO and DK Hostmaster, the entities responsible for the country code top-level domain (ccTLD) .dk. I have also worked for the Danish Ministry of Science, Technology and Innovation and for Telia Networks.

ICANN is all about the multistakeholder model. We actively seek participation from diverse cross-sections of society. From your perspective, what does the multistakeholder model of governance mean for Denmark?

Having ICANN58 in Copenhagen will help build an even stronger awareness of the role of Internet governance and of the multistakeholder model in Denmark. Today’s Internet ecosystem is broad – most societal and industrial sectors rely on the Internet. Almost every sector needs to take part in how the Internet is governed.

What relationship do you see between ICANN and its stakeholders and how would you like to see it evolve?

ETNO has always advocated for an active role in Internet governance. For this reason, we support the multistakeholder model, embodied by ICANN and its activities. We want to support ICANN as it takes its first steps after the transition. The multistakeholder model is an opportunity to bring positive values to the global Internet community. Freedom to invest and freedom to innovate both remain crucial to a thriving and diverse Internet environment.

What issues will you be following at ICANN58?

The discussion around the new generic top-level domains (gTLDs) will be very important. The program should be balanced and consider both the opportunities and the risks to be addressed. In addition, the work on enhancing ICANN’s accountability will be essential to rounding out the good work done so far with the transition. Another important issue is the debate on the migration from Internet Protocol version 4 (IPv4) to Internet Protocol version 6 (IPv6). Last but not least, trust is a top priority, so it’s important to participate in the discussions around security.

Read more here:: www.icann.org/news/blog.rss

Join the VIP Club

By John Sweeting

For IPv6 block holders

Have you been delaying your IPv6 deployment because you don’t have a portable IPv4 block from ARIN? If so, I have some very good news for you. Once you register an IPv6 block, you can immediately qualify to get a portable IPv4 block from ARIN to help you deploy IPv6. Read on for the details of this very important policy, exclusively for use in transitioning to IPv6.

But I thought ARIN ran out of IPv4 addresses?

Yes, ARIN did reach full IPv4 depletion in September 2015. That was the point at which our IPv4 free pool – meaning IPv4 addresses available under our standard policies – reached zero. Happily, years prior to IPv4 depletion, the ARIN community reserved IPv4 address space to be issued for specific purposes well after IPv4 depletion. One of those purposes is to assist networks with deployment of IPv6. This special reserve of IPv4 addresses is particularly useful for organizations that do not already have IPv4 space from ARIN, but it can be used by any organization that’s deploying IPv6 and meets the requirements of the policy.

I can get IPv4? Tell me more…

At the time the IANA free pool was depleted in February 2011, an entire IPv4 /10 (equivalent to 16,384 /24s) was set aside and earmarked to facilitate IPv6 deployment. The policy that created this special reserve is listed in our Number Resource Policy Manual (NRPM) under section 4.10. Almost all of that /10 still remains in the reserved pool and is available for you to request. To get your first IPv4 block from this reserve (typically a /24), you’ll need to meet a few basic requirements:

  • Use the block to immediately assist your IPv6 deployment (for example, to dual stack or to implement translation technologies like NAT-PT and NAT464)
  • Show you do not have any IPv4 allocations/assignments that can meet this need

You can then get up to one /24 every six months for that usage. When requesting additional space under the policy, you’ll need to show that you’re still using all the space you previously received under it to assist with your IPv6 deployment. There is no requirement that you return the space in the future. Make sure to review the policy text via the link above for more specific details on the requirements.
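
The arithmetic behind that reserve is easy to check with Python's standard library. The address range below is illustrative only, not ARIN's actual reserved block:

```python
import ipaddress

# A /10 contains 2**(24 - 10) = 16,384 distinct /24s, matching the
# figure quoted in the policy discussion above.
reserve = ipaddress.ip_network("203.0.0.0/10", strict=False)
subnets = list(reserve.subnets(new_prefix=24))

print(len(subnets))    # prints 16384
print(2 ** (24 - 10))  # same count, from the prefix arithmetic alone
```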

Great! How do I get it?

To get your IPv4 block, all you need to do is submit a request via ARIN Online as detailed on our Request Resources Page. We’ll ask you to provide some basic details on your existing IPv4 usage and how the requested IPv4 block will be used to facilitate your IPv6 deployment. If you don’t have your IPv6 address space yet, you’ll want to register that first before you submit your request for IPv4 space. You’ll then work with one of our analysts to verify you meet the requirements and get your IPv4 block.

If you have any questions, please submit an Ask ARIN request via your ARIN Online account and our team will make sure you can take advantage of this very useful policy to get your IPv4 block.

The post Join the VIP Club appeared first on Team ARIN.

Read more here:: teamarin.net/feed/


And the Wait Continues for .corp, .home and .mail Applicants

By Avri Doria

On 6 March 2017, ICANN’s Global Domains Division (GDD) finally responded to an applicant letter sent to the ICANN Board on 14 August 2016. This was not a response from the ICANN Board but from ICANN staff, and its content can best be described as a null response: it reminded the applicants that the Board had put the names on hold and was still deciding what to do. After six months of silence from the ICANN Board, the GDD staff reminds the applicants that they have not yet received an answer, notes that “the topic of name collision continues to be considered by the ICANN Board,” and tells them where they can go to continue waiting. This sad episode recalls the worst stories one hears about bureaucratic dithering. Twenty-four applicants, whose more than $4 million in application fees sit in ICANN’s coffers, continue to wait in ICANN’s waiting rooms for a timely response.

Five years after the 2012 gTLD round, applicants still wait for a response, without hope. ICANN is now in the midst of discussing subsequent applications for new gTLDs. In this process, the ICANN Board asks the community when it will be ready to open applications for more gTLDs, yet cannot find the time to start solving this problem from the previous round. I have discussed this problem in several blog posts in the past and find it amazing that after all this time the issue remains untouched by the ICANN Board.

The next step in solving this problem is actually rather easy. The applicants remain ready to work with ICANN on resolving the situation. There have been previous recommendations that a group of experts drawn from the applicants, ICANN staff, and the technical community work together on a solution. Various mitigation strategies and technical solutions remain possible but unexplored, begging to be discussed and worked on. It is unbelievable that five years after submission, ICANN has not put together a task force to resolve this embarrassing lack of progress. Does ICANN hope the applicants will tuck their tails between their legs and walk away without a resolution?

The three domain names are often referred to by some in the technical community as toxic names because of the complexities that come from their having been usurped for unapproved and dangerous private usage. The fact that these names are used improperly remains a risk to the Internet and constitutes a possible attack vector. These so-called toxic domain names should be treated like any toxic threat to the environment: with a cleanup. The best way to clean up the names is to mitigate the risks, educate the public, and put the names into delegated service. The domain names .corp, .home, and .mail should be designated as an Internet ‘super site,’ and plans should be developed immediately for cleaning up the situation.

Some claim that the names should simply be put on a toxic reserved list and abandoned. Not only would this perpetuate the risks they pose to the Internet, it would encourage others to grab any name they want and use it until it becomes toxic. While ICANN takes its time creating deliberate, well-formed programs for safe domain name delegation, it also extends an implicit invitation to grab any name, in the knowledge that there will be no response other than to allow the miscreants to continue using undelegated names with impunity. ICANN lets families and businesses keep using names under .corp, .home, and .mail without any attempt to inform them of the problem or to protect them from the security risks such undelegated names may cause.
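
As a minimal illustration of the exposure, here is a sketch of an audit that flags internal hostnames squatting on these undelegated labels. All hostnames below are hypothetical:

```python
# Names under undelegated TLDs such as .corp, .home, and .mail will
# collide (or leak queries toward the root) if those labels are ever
# delegated; flagging them is the first step of any cleanup.

UNDELEGATED_RISK_TLDS = {"corp", "home", "mail"}  # per the article

def collision_risks(hostnames):
    """Return hostnames whose top-level label is a known collision risk."""
    risky = []
    for name in hostnames:
        tld = name.rstrip(".").rsplit(".", 1)[-1].lower()
        if tld in UNDELEGATED_RISK_TLDS:
            risky.append(name)
    return risky

hosts = ["fileserver.corp", "printer.home", "intranet.example.com", "mx1.mail"]
print(collision_risks(hosts))  # prints ['fileserver.corp', 'printer.home', 'mx1.mail']
```

A real audit would also inspect DNS query logs and search-suffix configuration, not just a hostname inventory.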

It is hard to understand how ICANN could open up further applications for gTLDs while these applicants continue to dangle in the wind and while the Internet remains at risk from misuse of these Internet global resources. How could ICANN possibly collect more money from applicants when so many are left unresolved? How can an organization whose mission includes the stability and security of the Internet allow such a risk to continue unmitigated?

As ICANN58 begins, one wonders how long this intolerable situation will be allowed to continue without well-considered redress.

Written by Avri Doria, Researcher

Follow CircleID on Twitter

More under: DNS, DNS Security, Domain Names, ICANN, Security, Top-Level Domains

Read more here:: feeds.circleid.com/cid_sections/blogs?format=xml

The post And the Wait Continues for .corp, .home and .mail Applicants appeared on IPv6.net.

Read more here:: IPv6 News Aggregator