The French MP and Fields Medal winner Cédric Villani held an official hearing last Monday with Constance Bommelaer de Leusse, the Internet Society’s Senior Director of Global Internet Policy, on national strategies for the future of artificial intelligence (AI). The Internet Society was also asked to submit written comments, which are reprinted here.
* * *
“Practical AI successes, computational programs that actually achieved intelligent behavior, were soon assimilated into whatever application domain they were found to be useful […] Once in use, successful AI systems were simply considered valuable automatic helpers.”
AI is not new, nor is it magic. It’s about algorithms.
“Intelligent” technology is already everywhere — think of spam filters, or the systems banks use to monitor unusual activity and detect fraud — and it has been for some time. What is new, and what is driving so much interest from governments, is a series of recent successes in a subfield of AI known as “machine learning,” which has spurred the rapid deployment of AI into new fields and applications. It is the result of a potent mix of data availability, increased computing power and algorithmic innovation that, if well harnessed, could double economic growth rates by 2035.
So, governments’ reflection on what good policies should look like in this field is both relevant and timely. It’s also healthy for policymakers to organise a multistakeholder dialogue and empower their citizens to think critically about the future of AI and its impact on their professional and personal lives. In this regard, we welcome the French consultation.
I had a chance to explain the principles the Internet Society believes should be at the heart of AI norms, whether driven by industry or governments:
- Ethical considerations in deployment and design: AI system designers and builders need to apply a user-centric approach to the technology. They need to consider their collective responsibility in building AI systems that will not pose security risks to the Internet and its users.
- Ensure interpretability of AI systems: It should be possible to understand decisions made by an AI agent, especially when they have implications for public safety or could result in discriminatory practices.
- Empower users: The public’s ability to understand AI-enabled services, and how they work, is key to ensuring trust in the technology.
- Responsible deployment: The capacity of an AI agent to act autonomously, and to adapt its behaviour over time without human direction, calls for significant safety checks before deployment and ongoing monitoring.
- Ensure accountability: Legal certainty and accountability have to be ensured when human agency is replaced by the decisions of AI agents.
- Consider social and economic impacts: Stakeholders should shape an environment where AI provides socioeconomic opportunities for all.
- Open Governance: The ability of various stakeholders, whether in civil society, government, private sector, academia or the technical community to inform and participate in the governance of AI is crucial for its safe deployment.
You can read more about how these principles translate into tangible recommendations here.
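The “interpretability” principle above can be illustrated with a minimal sketch of my own (not taken from the Internet Society’s recommendations): an automated decision routine that returns a human-readable reason alongside every outcome, so that decisions with real-world consequences can be audited. The loan scenario, field names and thresholds are all invented for illustration.

```python
# A toy illustration of an interpretable automated decision: every outcome
# carries the rule that produced it. Thresholds and fields are hypothetical.

def decide_loan(applicant):
    """Return (approved, reason) so every outcome is explainable."""
    if applicant["income"] < 20_000:
        return False, "income below 20,000 threshold"
    if applicant["debt_ratio"] > 0.4:
        return False, "debt-to-income ratio above 0.4"
    return True, "meets income and debt-ratio criteria"

approved, reason = decide_loan({"income": 35_000, "debt_ratio": 0.25})
print(approved, "-", reason)  # True - meets income and debt-ratio criteria
```

A real machine-learning system is far harder to explain than this rule list, which is precisely why the principle calls for interpretability to be designed in from the start.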
The hearing organised by the French government also showed that the debate around AI is currently too narrow. So, we’d like to propose a few additional lenses to frame the debate about the future of AI in a more helpful way.
Think holistically, because AI is everywhere
Current dialogues around AI usually focus on applications and services that are visible and interact with our physical world, such as robots, self-driving cars and voice assistants. However, as our work on the Future of the Internet describes, the algorithms that structure our online experience are everywhere. The future of AI is not just about robots, but also about the algorithms that help us arrange the overwhelming amount of information in the digital world — algorithms that are intrinsic to the services we use in our everyday lives and a critical driver of the benefits the Internet can offer.
The same algorithms are also part of systems that collect and structure information, shaping how we perceive reality and make decisions in much subtler and more surprising ways. They influence what we consume, what we read, our privacy, and how we behave or even vote. In effect, they place AI everywhere.
Look at AI through the Internet access lens
Another flaw in today’s AI conversation is that much of it focuses solely on security implications and how they could affect users’ trust in the Internet. As our report on the future of the Internet shows, AI will also influence how you access the Internet in the very near future.
The growing size and importance of “AI-based” services, such as voice-controlled smart assistants for your home, means they are likely to become a main entry point to many of our online experiences. This could impact or exacerbate current challenges we see — including on mobile platforms — in terms of local content and access to platform-specific ecosystems for new applications and services.
Furthermore, major platforms are rapidly organising, leveraging AI through IoT to penetrate traditional industries. There isn’t a single aspect of our lives that will not be embedded in these platforms, from home automation and car infotainment to health care and heavy industries.
In the future, these AI platforms may become monopolistic walled gardens if we don’t think today about conditions to maintain competition and reasonable access to data.
Create an open and smart AI environment
To be successful and human-centric, AI also needs to be inclusive. This means creating inclusive ecosystems, leveraging interdependencies between universities that can fuel business with innovation, and enabling governments to give access to high-quality, non-sensitive public data. Germany sets a good example: its well-established multistakeholder AI ecosystem includes the German Research Center for Artificial Intelligence (DFKI), a multistakeholder partnership that is considered a blueprint for top-level research. Industry and civil society sit on the DFKI’s board to ensure research is application- and business-oriented.
Inclusiveness also means access to funding. There are many ways for governments to be useful, such as funding areas of research that are important to long term innovation.
Finally, creating a smart AI environment is about good, open and inclusive governance. Governments need to provide a regulatory framework that safeguards responsible AI, while supporting the capabilities of AI-based innovation. The benefits of AI will be highly dependent on the public’s trust in the new technology, and governments have an important role in working with all stakeholders to empower users and promote its safe deployment.
Learn more about Artificial Intelligence and explore the interactive 2017 Global Internet Report: Paths to Our Digital Future.
Take action! Send your comments on AI to Mission Villani and help shape the future.
Written by Constance Bommelaer, Senior Director, Global Internet Policy, Internet Society
Follow CircleID on Twitter
More under: Policy & Regulation
Read more here: feeds.circleid.com/cid_sections/blogs?format=xml
University of Cambridge’s Professor Ross Anderson explains why safety should be higher on the agenda than privacy. (From the Computerphile YouTube channel)
As we increasingly move towards an IoT world, vendors of safety-critical devices will need to patch their systems just as regularly as phone and computer vendors do now. Researchers warn that many regulators who previously thought only in terms of safety will have to start thinking about security as well. A recent project, conducted for the European Commission by a research group at the University of Cambridge’s Computer Laboratory, has produced a report on what will happen to safety regulation once computers are embedded invisibly everywhere. It will require major changes to safety regulation and certification, the report warns.
— “At present, the regulation of safety is largely static, consisting of pre-market testing according to standards that change slowly if at all. Product recalls are rare, and feedback from post-market surveillance is slow, with a time constant of several years. In the future, safety with security will be much more dynamic; vendors of safety-critical devices will patch their systems once a month, just as phone and computer vendors do now. This will require major changes to safety regulation and certification, made more complex by multiple regulatory goals. For these reasons, a multi-stakeholder approach involving co-vigilance by multiple actors is inevitable.”
— “The EU is already the world’s main privacy regulator, as Washington doesn’t care and nobody else is big enough to matter … The strategic political challenge facing the European Union is whether it wants to be the world’s safety regulator. If it rises to this challenge, then just as engineers in Silicon Valley now consider Europe to be the world’s privacy regulator, they will defer to Europe on safety too. The critical missing resource is expertise on cybersecurity, and particularly for the European regulators and other institutions that will have to adapt to this new world.”
— “The strategic research challenge will include how we make systems more sustainable. At present, we have enough difficulty creating and shipping patches for two-year-old mobile phones. How will we continue to patch the vehicles we’re designing today when they are 20 or 30 years old? How can we create toolchains, libraries, APIs and test environments that can be maintained not just for years but for decades?”
Read more here: feeds.circleid.com/cid_sections/news?format=xml
Today we would like to join our community in celebrating the 5th anniversary of World IPv6 Launch, a global event facilitated by the Internet Society in which many organizations committed to permanently turning on IPv6. We’ve seen an enormous number of IPv6 deployments in the ARIN region since the launch five years ago, and it has been incredible to witness how our community is embracing the future of the Internet.
We want to thank our members and community organizations for all of the hard work and dedication that they have put into spreading the word about IPv6. We’re proud to say that IPv6 requests at ARIN have remained steady over the years and more than 50% of our members have registered IPv6 to date (amazing!). We’re confident many organizations in our region are out there making their IPv6 deployments happen right now.
(Chart: ARIN IPv6 statistics as of March 2017)
For the past decade, ARIN has made outreach surrounding IPv4 depletion and IPv6 adoption a top priority. We have been engaging with enterprises across industries to encourage IPv6 deployment with our Get6 campaign, and throughout these last five years we have seen some very interesting things come out of many member organizations within our region. To name only a few, we have seen major access provider deployments in the region from industry leaders like Comcast, Verizon, T-Mobile, and Sprint. We have seen content related providers like Facebook, LinkedIn, Akamai, and Google make significant strides in providing IPv6-enabled content. In Canada, thanks to CIRA, we’ve seen growth in a coast-to-coast network of IXPs that enable small and medium local ISPs to offer IPv6 services. These companies and many more have made some incredible progress with IPv6 that helps give others the boost they need to take the steps toward deploying IPv6 within their own organization as well.
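As a small practical aside (mine, not ARIN’s): you can check whether a given site advertises IPv6 addresses with a few lines of Python, since a host that publishes AAAA records in DNS is reachable over IPv6. The hostnames in the demo are illustrative, not drawn from the article.

```python
# Look up a hostname's IPv6 (AAAA) addresses -- a quick way to see whether
# a site has enabled IPv6. Hostnames below are examples only.
import socket

def ipv6_addresses(host):
    """Return the set of IPv6 addresses advertised for a hostname."""
    try:
        infos = socket.getaddrinfo(host, None, socket.AF_INET6)
    except socket.gaierror:
        return set()  # no AAAA records, or name resolution failed
    return {info[4][0] for info in infos}

if __name__ == "__main__":
    for host in ("www.google.com", "www.facebook.com"):
        addrs = ipv6_addresses(host)
        status = "IPv6-enabled" if addrs else "no AAAA records found"
        print(f"{host}: {status}")
```

Running this against your own organization’s domains is one quick way to take stock of where your IPv6 deployment stands.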
If you have taken steps to deploy IPv6 at your organization, thank you for your hard work and dedication. Let’s all continue to work together to make IPv6 the new normal.
Read more here:: teamarin.net/feed/
I like a conference that’s “live”. Not just a lively crowd coming together to passionately discuss and debate matters of common interest, but live in the sense of physical presence: things you can feel and touch. In the case of the TM Forum Live! 2017 event, held last week in Nice, France, that meant the Catalyst Pavilions, where innovative solutions, best practices, and even exploratory experiments were on full display.
Do I mean that at a trade show for IT Operations Support Systems (OSS) and Business Support Systems (BSS), you can touch the technology? Yep.
“Touching” in the sense that you can see and interact with real tools, platforms, and live demonstrations from live telecom networks, in real life deployments. You can see how concepts are developed into operational tools; you can touch tools that became operational platforms powering network and service convergences for service providers; you can come and visualize how disparate, siloed processes and manual work are being automated and integrated; and you can interact and even challenge why innovations haven’t delivered the results promised.
“Hands-on” is what really grabbed my attention.
IT operations optimization, data analytics, service quality improvements, customer-centric processes, interfaces, APIs — you can touch them all under one roof. That “hands-on” engagement is what makes TMF feel close: touch it, play with it, see how it would apply in your own world. Demonstrations and examples range from IT operation process automation, Quality of Services (QoS) for customer-centric operations models, to the Internet of Things (IoT), data analytics, platforms, and APIs.
So much has changed and evolved from the traditional OSS/BSS to what is now the OSS/BSS of the Network Function Virtualization (NFV) and Software-Defined Networking (SDN) landscape. The stodgy old OSS/BSS is being pushed through a transformative change, driven by the demand for business agility.
Business agility is a reality for any IT department and for service providers who need to survive in a world quickly transformed by increasingly interactive service provider/subscriber relationships. Consumer demand for access is high, leading to fierce competition amongst providers for subscriber loyalty and creating business drivers for fast new service launches, targeted and personalized service packages, easy and on-demand self-service and self-authentication of services, and promotional sign-ons.
As a result, the collaboration between the Chief Marketing Officers’ (CMOs) department, Chief Information Officers’ (CIOs) department and Chief Technology Officers’ (CTOs) department has intensified. IT is no longer satisfied with being handed down business requirements by business groups such as Sales and Marketing and Product Management. IT has to strive to be a business partner. Service providers aligning their organization to achieve business agility are merging their traditional Network Engineering functions and back-office IT organization all under one executive branch of the CTO. The goal is to drive DevOps agility and faster time to deployment.
Leveraging technology to create business agility is easier said than done, as often lamented by people working in the trenches. A lot of it has to do with integrating legacy systems, but it’s also related to what I would call “self-inflicted” processes and workflows built for yesterday’s market and subscribers. Today, the combination of fast 4G LTE broadband connections, Google searches that put information at consumers’ fingertips, the omnipresence and accessibility of information as organizations digitize their assets, and the power of video from companies like Google, YouTube, Facebook, and Twitter is changing our lives. This in turn changes the way service providers interact with their target audiences. Business agility is not simply a buzzword, but a matter of survival!
How do I create stickiness with my existing subscribers? How do I acquire prospective subscribers? How do I entice subscribers to spend more with me? Service providers need solutions to these questions. They need to access data, such as subscriber usage patterns, and turn that data into intelligence. Then they can turn intelligence into service packages, offer those packages to the most profitable customers, and activate and provision them quickly, reliably, and securely. Even more, the services need to be available anywhere: when the subscriber is at home, in the car, on the train, in a stadium, or at a concert hall.
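The data-to-intelligence-to-package pipeline described above can be sketched in a toy example of my own (the segments, thresholds and package names are all invented for illustration): bucket subscribers by usage pattern, then map each segment to a candidate offer.

```python
# A toy sketch of turning raw usage data into per-subscriber package
# recommendations. Segments, thresholds and packages are hypothetical.

def segment(subscriber):
    """Classify a subscriber record by monthly usage (GB) and video share."""
    if subscriber["gb_per_month"] > 50 and subscriber["video_share"] > 0.5:
        return "heavy-video"
    if subscriber["gb_per_month"] > 50:
        return "heavy-data"
    return "light"

PACKAGES = {
    "heavy-video": "unlimited streaming add-on",
    "heavy-data": "high-cap data plan",
    "light": "pay-as-you-go bundle",
}

def recommend(subscribers):
    """Map each subscriber's usage pattern to a candidate service package."""
    return {s["id"]: PACKAGES[segment(s)] for s in subscribers}

subs = [
    {"id": "a1", "gb_per_month": 80, "video_share": 0.7},
    {"id": "b2", "gb_per_month": 12, "video_share": 0.2},
]
print(recommend(subs))
```

In production, of course, the segmentation would come from analytics over live usage data rather than hand-written thresholds, and the recommendation would feed directly into the activation and provisioning systems discussed below.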
Service activation, service provisioning, and automatic provisioning that allows a subscriber to self-authenticate, sign in, or sign up are at the center of this OSS/BSS transformation.
Over the years, trade shows and presentations have featured increasingly colorful charts and diagrams, with models updating themselves before we can digest the previous ones. Flow charts and puzzle pieces fly off the wall. What we need is simplification. What we need is a common-sense approach to good old-fashioned problem solving:
- Establish the goals
- Map out roles and responsibilities
- Identify obstacles
- Set up checkpoints of the strongest and weakest links
- Articulate risk mitigation measures
Before we can achieve service orchestration agility, we need to retool our human processes, workflows, and interactions; our “workforce orchestration!”
As I observe and participate in this industry as a technologist, I explore, first and foremost, the role of technology in changing human behaviors and interactions. Technology is a means to the goal, not the goal itself. It helps facilitate change. Digitization drives business process transformation and delivers agility only if human behavior changes. We need to make transparency and collaboration two of the legs of the success stool.
One has to feel energized coming out of a conference like this, armed with all of these lessons. The event has always enjoyed a perfect setting, and this year Nice delivered: three perfect, sunny days. I couldn’t help but call out “#ILoveNice” as I took full advantage of the beautiful French Riviera seaside scenery for my daily morning runs.
We’ll be back next year!
Written by William L. Yan, Chief Operating Officer at Incognito Software Systems
More under: Telecom
Read more here: feeds.circleid.com/cid_sections/blogs?format=xml