Setting Sail: Next Stop, Caribbean!

By Bevil Wooding

We are excited to announce the launch of new ARIN in the Caribbean 2018 activities and open registration for our first two events, which will take place in Grenada and Barbados.

Similar to our ARIN on the Road events, these are one-day programs featuring information on our services, as well as how we can help you and your organizations design, secure, and maintain robust networks and contribute to Internet numbering policy development for the region.

ARIN in the Caribbean events are free to attend and offer a great environment to learn and share. The program includes presentations on timely topics such as obtaining IPv6 addresses from ARIN and transfers of number resources. In addition, there will be presentations on current policy discussions, ARIN technical services, and best practices for building resilient Caribbean networks.

The agenda for our upcoming meetings will cover the following topics:

  • ARIN’s Mission and Core Functions
  • ARIN Technical Services
  • Policy Development at ARIN
  • ARIN and Caribbean Network Autonomy and Resilience
  • IPv4 Services – Waiting List, Transfers, and more
  • IPv6 and ASN Services – Obtaining Resources, Creating Networking Plans
  • ARIN Q&A – Open microphone to answer your questions!

Each day will conclude with an open microphone question and answer session, followed by a drawing for a $100 USD Amazon gift card for those who complete a short survey about the event.

Space is limited at each event, so if you are interested in attending one of our upcoming events, please register on or before 2 February 2018!

If you are not available to join us in Grenada or Barbados, please visit our brand new ARIN in the Caribbean page for a list of other planned ARIN in the Caribbean events!

The post Setting Sail: Next Stop, Caribbean! appeared first on Team ARIN.

Read more here: teamarin.net/feed/

Antenova to show two new high performing 4G/LTE diversity antennas for small PCBs

By Zenobia Hegde

Antenova Ltd, manufacturer of antennas and RF antenna modules, is showing a new pair of high-performing 4G/LTE antennas, suitable for PCBs as small as 60 mm, at CES, the consumer electronics show. The two antennas can also be used in 3G and MIMO applications.

The two antennas are similar – the difference being that Inversa is built for the USA market while Integra is for European and Asian markets.

Both antennas are available in left and right versions to provide more options for placement on the PCB, and can be used singly or in pairs for MIMO. Both use beam steering to ensure good isolation and cross correlation, and achieve high performance.

Inversa, part numbers SR4L034-L/SR4L034-R, measures 28.0 x 8.0 x 3.3 mm and covers the USA bands 698-798 MHz, 824-960 MHz, 1710-2170 MHz, 2300-2400 MHz and 2500-2690 MHz.

Integra, part numbers SR4L049-L/SR4L049-R, measures 23.0 x 8.0 x 3.3 mm and covers the bands 791-960 MHz, 1710-2170 MHz, 2300-2400 MHz and 2500-2600 MHz, used in Europe and Asia.

Antenova has designed these antennas for use in small trackers, OBDs and other similar devices where space is limited. For more details, antenna samples and evaluation boards, please click here.

Comment on this article below or via Twitter: @IoTNow_OR @jcIoTnow

The post Antenova to show two new high performing 4G/LTE diversity antennas for small PCBs appeared first on IoT Now – How to run an IoT enabled business.

Read more here: www.m2mnow.biz/feed/

Fueled by Kafka, Stream Processing Poised for Growth

By Alex Woodie

Once a niche technique used only by the largest organizations, stream processing is emerging as a legitimate technique for dealing with the massive amounts of data generated every day. While it's not needed for every data challenge, organizations are increasingly finding ways to incorporate stream processing into their plans, particularly with the rise of Kafka.

Stream processing is just that: processing data as soon as it arrives, as opposed to processing it after it lands. The amount of processing applied to the data as it flows can vary greatly. At the low end, users may do little more than a simple transformation, such as converting temperatures from Celsius to Fahrenheit, or combine one stream with another; at the upper end, stream processors may apply real-time analytics or machine learning algorithms.
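The Celsius-to-Fahrenheit case mentioned above can be sketched as a tiny Python generator that transforms each record the moment it arrives, with nothing buffered or landed first (a toy illustration of the idea, not a production stream processor):

```python
# Minimal illustration of stream processing: each record is transformed
# as it arrives, rather than after the full dataset has landed.

def fahrenheit_stream(celsius_readings):
    """Lazily convert a stream of Celsius readings to Fahrenheit."""
    for c in celsius_readings:
        yield c * 9 / 5 + 32  # emit immediately; nothing is buffered

# Simulate an unbounded source with a plain iterator.
readings = iter([0.0, 100.0])
print(list(fahrenheit_stream(readings)))  # [32.0, 212.0]
```

The same shape generalizes: swap the arithmetic for a model invocation and the iterator for a message-bus consumer, and you have the upper end of the spectrum the article describes.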

Almost any type of data can be used in stream processing. Sources can include database events from RDBMSs or NoSQL stores, sensor data from the IoT, comments made on social media, or a credit card swipe. The data's destination can be similarly diverse: a traditional file system, a relational or NoSQL database, a Hadoop data lake, or a cloud-based object store.

What happens between that initial data creation event and the moment the data is written to some type of permanent repository is collectively referred to as stream processing. Initially, proprietary products developed by the likes of TIBCO, Software AG, and IBM handled streaming data. More recently, distributed open source frameworks have emerged to deal with the massive surge in data generation.

Apache Kafka, a distributed publish-and-subscribe message queue that is open source and relatively easy to use, is by far the most popular of these open source frameworks, and industry insiders see Kafka as helping to fuel the ongoing surge in demand for tools to work with streaming data.
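Kafka itself requires a running broker, but the publish-and-subscribe model it implements can be sketched in plain Python. The `MiniBroker` class and topic names below are hypothetical stand-ins, not Kafka's actual API: topics are append-only logs, and each consumer group reads independently at its own offset.

```python
from collections import defaultdict

class MiniBroker:
    """Toy stand-in for a Kafka-style broker: topics are append-only
    logs, and each consumer group tracks its own read offset."""
    def __init__(self):
        self.topics = defaultdict(list)   # topic name -> list of messages
        self.offsets = defaultdict(int)   # (group, topic) -> next index

    def publish(self, topic, message):
        self.topics[topic].append(message)

    def poll(self, group, topic):
        """Return unread messages for this group and advance its offset."""
        log = self.topics[topic]
        start = self.offsets[(group, topic)]
        self.offsets[(group, topic)] = len(log)
        return log[start:]

broker = MiniBroker()
broker.publish("sensor-temps", {"device": "t-1", "celsius": 21.5})
broker.publish("sensor-temps", {"device": "t-2", "celsius": 19.0})

# Two independent subscribers each see the full log.
print(broker.poll("dashboard", "sensor-temps"))  # both messages
print(broker.poll("archiver", "sensor-temps"))   # both messages again
print(broker.poll("dashboard", "sensor-temps"))  # nothing new for this group
```

Decoupling producers from consumers this way is what lets one event stream feed a dashboard, an archiver, and an analytics job at the same time, which is the property the article credits for Kafka's popularity.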

Steve Wilkes, the CTO and founder of Striim, says Kafka’s popularity is helping to push stream processing into the center stage. “Kafka is driving a lot of our market,” he says. “A good majority of our customers are utilizing Kafka in one way, shape, or form.”

The underlying trend driving investment in stream processing is that customers need access to the latest data, Wilkes says. “It’s the recognition that, no matter how you’re doing analytics, whether you’re doing them in streaming fashion or after the fact through some sort of Hadoop jobs or big data analytics, you need that up-to-date data,” he tells Datanami.

Striim this week unveiled a new release of its stream data processing solution, Striim version 3.8, that features better support for Kafka. This includes the capability to automatically scale Striim to more efficiently read from and write to Kafka as users scale up their real-time streaming architecture.

Many Kafka users are using the core Kafka product, along with the open source Kafka Connect software, to rapidly move data from its source to another destination, such as Hadoop or a data lake hosted on the cloud. Fewer shops are using the Kafka Streams API to write application logic on top of the message bus, a niche that third-party vendors are moving to fill.

According to a recent report from Confluent, the company behind open source Kafka and developer of the Confluent Platform, 81% of Kafka customers are using it to build data pipelines. Other common use cases include real-time monitoring, ETL, microservices, and building Internet of Things (IoT) products.

Keeping the data lake updated with fresh data is an increasingly difficult task – and one that stream processing is being asked to fill as a sort of modern ETL role. According to Syncsort‘s recent 2018 Big Data Trends survey, 75% of respondents say that keeping their data lake updated with changing data sources is either “somewhat” or “very difficult.”

Another vendor that’s seeing the Kafka impact is Streamsets, a software vendor that bills itself as the “air traffic control” for data in motion. Streamsets’ initial product was a data collector that automated some of the nitty-gritty work involved in capturing and moving data, often atop the Kafka message queue. The vendor recently debuted a low-footprint data collector that works in CPU- and network-constrained environments, and a cloud-based console for managing the entire flow of a customer’s data.

Streamsets Vice President of Marketing Rick Bilodeau says Kafka is driving a lot of the company’s business. “We do a lot of work with customers for Kafka, for real-time event streaming,” he tells Datanami. “We see fairly broad Kafka adoption as a message queue, where people are using [Streamsets software] primarily to broker data in and out of the Kafka bus.”

Some of Streamsets customers have a million data pipelines running at the same time, which can lead to serious management challenges. “Companies will say, ‘We built a bunch of pipelines with Kafka, but now have a scalability problem. We can’t keep throwing people at it. It’s just taking us too long to put these things together,’” Bilodeau says. “So they use data collector to accelerate that process.”

Today, Streamsets sees lots of customers implementing real-time stream processing for Customer 360, cybersecurity, fraud detection, and industrial IoT use cases. Stream processing is still relatively new, but it’s beginning to grow in maturity rapidly, Bilodeau says.

“It’s not the first inning, for sure. It’s maybe the third inning,” he says. “On the Gartner Hype Cycle, it’s approaching early maturity. Every company seems to have something they want to do with streaming data.”

Striim’s Wilkes agrees. Fewer than half of enterprises are working with streaming data pipelines, he estimates, but it’s growing solidly. “Streaming data wasn’t even really being talked about a few years ago,” he says. “But it’s really starting to get up to speed. There is a steady progression.”

We’re still mostly in the pipeline-building phase, where identifying data sources and creating data integrations dominates real-time discussions, Wilkes says. That period will give way to more advanced use cases as people become comfortable with the technology.

“We’re seeing that a lot of customers are still at the point of obtaining streaming sources. They understand the need to get a real-time data infrastructure,” he says. “The integration piece always comes first. The next stage after you have access to the streaming data is starting to think about the analytics.”

Related Items:

Spark Streaming: What Is It and Who’s Using It?

How Kafka Redefined Data Processing for the Streaming Age

The post Fueled by Kafka, Stream Processing Poised for Growth appeared first on Datanami.

Read more here: www.datanami.com/feed/

Sengled Partners with Baidu to Introduce China’s First Voice-Activated Smart Lamp Speaker

By IoT – Internet of Things

Sengled is partnering with leading AI company, Baidu, to introduce the Sengled Smart Lamp Speaker, a new voice-enabled lighting concept. Both companies will showcase Sengled Smart Lamp Speaker at the annual CES in Las Vegas, NV, January 9-12, 2018, with the official reveal commencing at the Baidu World @ Las Vegas on January 8, 2018. The […]

The post Sengled Partners with Baidu to Introduce China’s First Voice-Activated Smart Lamp Speaker appeared first on IoT – Internet of Things.

Read more here: iot.do/feed

Major Breakthroughs in Smart Devices and IoT from Taiwan to be Unveiled at CES 2018

By IoT – Internet of Things

Kicking the new year off with a bang, the Taiwan External Trade Development Council (TAITRA), Taiwan’s foremost trade promotion organization, will provide a first look at brand new “smart” innovations from ITRI, AEON Motor, GEOSAT, Robotelf, and Taiwan Main Orthopedics at a January 8 press conference at CES 2018. The companies will showcase products poised […]

The post Major Breakthroughs in Smart Devices and IoT from Taiwan to be Unveiled at CES 2018 appeared first on IoT – Internet of Things.

Read more here: iot.do/feed

Predicting the maintenance

By Zenobia Hegde

It happens at the worst of times – late for a meeting, on the way to the rugby and even when you’re desperate for the bathroom. When your car breaks down, you can moan in retrospect, acknowledging the signs that it needed urgent maintenance. Thanks to technology, more specifically the evolution and application of cognitive learning, these frustrating occurrences will become a thing of the past.

Connecting the things

Analyst house Gartner forecasts that there will be 20.8 billion connected ‘things’ worldwide by 2020. Enterprises that stick to an old ‘preventive’ data methodology, says Mark Armstrong, managing director and vice-president International Operations, EMEA & APJ at Progress, are going to be left behind, as this approach accounts for a mere 20% of failures.

Predictive maintenance brings a proactive, resource-saving opportunity. Predictive software can alert the manufacturer or user when equipment failure is imminent, and can even carry out the maintenance process automatically ahead of time. This is calculated from real-time data, via metrics including pressure, noise, temperature, lubrication and corrosion, to name a few.
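One minimal baseline for this kind of alerting (a generic statistical sketch, not any specific vendor's method) is to flag a sensor reading that drifts several standard deviations away from its recent baseline; the metric values and thresholds below are invented for illustration:

```python
from statistics import mean, stdev

def maintenance_alert(history, reading, k=3.0):
    """Flag a reading that deviates more than k standard deviations
    from the recent baseline - a crude stand-in for the predictive
    models described above."""
    mu, sigma = mean(history), stdev(history)
    return abs(reading - mu) > k * sigma

# Bearing temperature (deg C) over recent samples, then a suspect spike.
recent = [71.2, 70.8, 71.5, 70.9, 71.1, 71.3]
print(maintenance_alert(recent, 71.4))  # False - within the normal band
print(maintenance_alert(recent, 78.0))  # True - schedule maintenance
```

Real predictive-maintenance systems replace the threshold with learned degradation models over many correlated metrics, but the principle is the same: act on the live signal before the failure, not after it.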

By considering degradation patterns that illustrate the wear and tear of the vehicle in question, the production process suffers far fewer interruptions than it would without the technology. By monitoring systems live, breakdowns can be avoided before they happen.

It’s no longer a technological fantasy. Because data from cars has been collected for decades, researchers and manufacturers can gather insights to prepare predictive analytics. This will assist in predicting which individual cars will break down and need maintenance.

Now that the Internet of Things (IoT) is a reality, car manufacturers can use this information to offer timely and relevant additional customer services based on sophisticated software that can truly interrogate, interpret and use data. So who is going to be responsible for taking advantage of this technology?

Bolts and screws

Key management figures in the transport industry must commit to a maintenance management approach to implement a long-term technological solution. As described by R. Mobley, run-to-failure management sees an organisation refrain from spending money in advance, reacting only to machine or system failure. This reactive method may result in high overtime labour costs, high machine downtime and low productivity.

Similarly reactive, preventive maintenance monitors the mean-time-to-failure (MTTF), based on the principle that new equipment is at its most vulnerable during the first few weeks of operation, and again as it ages. This can manifest itself in various guises, such as engine lubrication or major structural adjustments. However, predicting the time frame in which a machine will need to be reconditioned may be unnecessary and costly.

As an alternative, predictive maintenance allows proactivity, ensuring longer intervals between scheduled repairs whilst reducing the significant number of crises that would otherwise have to be addressed due to mechanical faults. With a cognitive predictive model, meaning applications are able to teach themselves as they function, organisations will be able to foresee exactly why and when a machine will break down, allowing them to act […]

The post Predicting the maintenance appeared first on IoT Now – How to run an IoT enabled business.

Read more here: www.m2mnow.biz/feed/

Shipments of cellular M2M terminals to reach 13.7 million by 2022, says Berg Insight

By Zenobia Hegde

Berg Insight, the M2M/IoT market research provider, released new findings about the market for cellular M2M terminals. About 4.9 million cellular M2M terminals were shipped globally during 2016, an increase of 28.0% from the previous year.

Growing at a compound annual growth rate (CAGR) of 18.8%, this number is expected to reach 13.7 million in 2022. Berg Insight defines cellular terminals as standalone devices intended for connecting M2M applications to a cellular network. These include primarily general-purpose cellular routers, gateways and modems that are enclosed in a chassis and have at least one input/output port. Trackers, telematics devices and other specialised devices are excluded from this report.
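The quoted growth rate can be sanity-checked against the shipment figures with the standard CAGR formula, rate = (end / start)^(1 / years) − 1; the small gap versus the quoted 18.8% presumably comes from rounding of the shipment numbers:

```python
def cagr(start, end, years):
    """Compound annual growth rate implied by a start and end value."""
    return (end / start) ** (1 / years) - 1

# Berg Insight's figures: 4.9 million terminals in 2016 -> 13.7 million in 2022.
rate = cagr(4.9, 13.7, 2022 - 2016)
print(f"{rate:.1%}")  # 18.7%
```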

North American and Asian vendors dominate the global cellular M2M terminal market. Cradlepoint, Sierra Wireless and Digi International are the largest vendors in North America, whilst SIMCom is the main manufacturer on the Asian market. Combined, these four vendors generated close to US$ 415 million (€349.15 million) in revenues from M2M terminal sales during 2016. This is equivalent to nearly 50% of the global market.

Other noteworthy vendors include CalAmp, Multitech Systems and Encore Networks in the US, Xiamen Four-Faith, Maestro Wireless and InHand Networks in Asia, Teltonika, HMS Networks, Advantech B+B SmartWorx, NetModule, Matrix Electrónica, Eurotech, Gemalto, Dr. Neuhaus and Option in Europe and NetComm Wireless in Australia.

A large number of small and medium sized vendors are active on the European market, whilst the North American market is dominated by a handful of major vendors, largely due to barriers in the form of carrier certifications required for cellular devices in the region.

“Adoption of 4G LTE in cellular routers, gateways and modems has increased rapidly in recent times due to an increased focus on product life cycle costs and the decommissioning of 2G networks”, said Fredrik Stålbrand, IoT analyst, Berg Insight. He adds that two thirds of the cellular M2M terminals sold globally during 2017 used 4G LTE as the main standard.

“LPWA technologies such as LTE Cat M1 and NB-IoT are expected to ease the transition from 2G to LTE networks further”, continued Mr. Stålbrand. In 2017, introductions of cellular M2M terminals featuring LTE Cat M1 and NB-IoT technologies were made by Encore Networks, Maestro Wireless and MultiTech Systems and several vendors plan to launch new products with LPWA connectivity during 2018.

Download report brochure here.

The post Shipments of cellular M2M terminals to reach 13.7 million by 2022, says Berg Insight appeared first on IoT Now – How to run an IoT enabled business.

Read more here: www.m2mnow.biz/feed/

Cloud Competition Intensifies – Rapid Growth Ahead for Microsoft Azure and Google Cloud Platform

By A.R. Guess

According to a recent press release, “A new LogicMonitor® survey of nearly 300 industry influencers predicts that enterprises will migrate the majority of their IT workloads from the data center to the cloud by 2020. Fueling this transition will be the 20.8 billion IoT devices Gartner predicts will come online, and the […]

The post Cloud Competition Intensifies – Rapid Growth Ahead for Microsoft Azure and Google Cloud Platform appeared first on DATAVERSITY.

Read more here: www.dataversity.net/feed/

Europe and North America will reach 65.2m active insurance telematics policies in 2021, Berg forecasts

By Zenobia Hegde

According to a new research report from the IoT analyst firm Berg Insight, the number of insurance telematics policies in force on the European market reached 6.8 million in Q4-2016. Growing at a compound annual growth rate (CAGR) of 34.8%, this number is expected to reach 30.0 million by 2021. In North America, the number of insurance telematics policies in force is expected to grow at a CAGR of 38.2% from 6.9 million in Q4-2016 to reach 35.2 million in 2021.

The European insurance telematics market is largely dominated by hardwired aftermarket black boxes while self-install OBD devices represent the vast majority of the active policies in North America. Several major US insurers have however recently shifted to solutions based on smartphones. Berg Insight expects a rapid increase in the uptake of smartphone-based solutions in all markets in the upcoming years.

“The US, Italy, the UK and Canada are still the largest markets in terms of insurance telematics policies”, said Martin Svegander, M2M/IoT analyst at Berg Insight. In North America, the market is dominated by US-based Progressive, Allstate, Liberty Mutual and State Farm as well as Intact Financial Corporation and Desjardins in Canada.

The Italian insurers UnipolSai and Generali together accounted for around 50% of the telematics-enabled policies in Europe. UK insurers with strong adoption of telematics-enabled policies include Admiral Group, Insure The Box and Direct Line. Several insurers in the rest of Europe have also shown a substantial uptake of telematics in 2016–2017.

“Insurers are increasingly expected to embrace every aspect of telematics to reduce the cost of claims, improve the underwriting process and add services to increase the customer value through differentiated telematics offerings”, continued Mr. Svegander.

He added that several attempts to reduce distracted driving and increase consumer engagement using smartphone-based insurance telematics have been seen in both Europe and North America. “Consumer engagement is now the focus for most insurance telematics programmes and will continue to be an important topic in the near term”, concluded Svegander.

The insurance telematics value chain spans multiple industries including a large ecosystem of companies extending far beyond the insurance industry players. Automotive OEMs are showing an increasing interest in insurance telematics. Examples include General Motors, Ford, BMW, Daimler, PSA Group and Fiat. The vehicle manufacturers are expected to drive the long-term development of insurance telematics by offering the possibility to utilise connected car OEM data in pay-how-you-drive offers.

Notable aftermarket telematics service providers with a focus on insurance telematics include Octo Telematics with over 5.3 million active devices in Q4-2017 and other end-to-end solution providers such as Vodafone Automotive and Viasat Group. LexisNexis Risk Solutions, Intelligent Mechatronic Systems, Cambridge Mobile Telematics, Modus, The Floow, Scope Technologies and TrueMotion are also important players on the insurance telematics market.

Download the report brochure here: Insurance Telematics in Europe and North America.

The post Europe and North America will reach 65.2m active insurance telematics policies in 2021, Berg forecasts appeared first on IoT Now – How to run an IoT enabled business.

Read more here: www.m2mnow.biz/feed/

Face authentication and the future of security

By Zenobia Hegde

Apple’s iPhone X has given us a glimpse into the future of personal data security. By 2020 we’ll see billions of smart devices being used as mobile face authentication systems, albeit with varying degrees of security. The stuff of science fiction for years, face recognition will surpass other legacy biometric login solutions, such as fingerprint and iris scans, because of a new generation of AI-driven algorithms, says Kevin Alan Tussy, CEO of FaceTec.

The face recognition space has never received more attention than after the launch of Face ID, but with the internet now home to dozens of spoof videos fooling Face ID with twins, relatives and even olives for eyes, the expensive hardware solution has left many questioning whether this is just another missed opportunity to replace passwords.

Face recognition is a biometric method of identifying an authorised user by comparing the user’s face to the biometric data stored in the original enrolment. Once a positive match is made and the user’s liveness is confirmed, the system grants account access.

A step up in security, Face Authentication (Identification + Liveness Detection), offers important and distinct security benefits: no PIN or password memorisation is required, there is no shared secret that can be stolen from a server, and the certainty the correct user is logging in is very high.
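As a rough illustration of that flow (the embedding vectors, similarity threshold, and function names below are invented for the sketch; real systems use deep face embeddings and active liveness challenges), authentication succeeds only when both the biometric match and the liveness check pass:

```python
import math

def cosine_similarity(a, b):
    """Similarity between two face-embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def authenticate(enrolled, probe, liveness_passed, threshold=0.9):
    """Grant access only when the probe matches the enrolled template
    AND a liveness check confirms a real, present user."""
    return cosine_similarity(enrolled, probe) >= threshold and liveness_passed

enrolled = [0.10, 0.90, 0.40]        # template captured at enrolment
probe    = [0.12, 0.88, 0.41]        # new capture at login time

print(authenticate(enrolled, probe, liveness_passed=True))   # True
print(authenticate(enrolled, probe, liveness_passed=False))  # False - a photo
                                     # can match perfectly yet still be rejected
```

The second call is the point of the article's distinction: recognition alone would accept a photo of the right face, while authentication rejects it because no live user is present.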

Apple’s embrace of Face ID has elevated face recognition into the public consciousness, and when compared to mobile fingerprint recognition, face recognition is far superior in terms of accuracy. According to Apple, their new face scanning technology is 20 times more secure than the fingerprint recognition currently used in the iPhone 8 (Touch ID) and Samsung S8. Using your face to unlock your phone is, of course, a great step forward, but is that all a face biometric can do? Not by a long shot.

While the goal of every new biometric has been to replace passwords, none have succeeded because most rely on special hardware that lacks liveness detection. Liveness detection, the key attribute of Authentication, verifies the correct user is actually present and alive at the time of login.

True 3D face authentication requires identity verification plus depth sensing plus liveness detection. This means photos or videos cannot spoof the system, nor can animated images like those created by CrazyTalk; even 3D representations of a user, like projections on foam heads, custom masks, and wax figures, are rebuffed.

With the average price of a smartphone hovering around £150 (€170.58), expensive hardware-based solutions, no matter how good they get, won’t ever see widespread adoption. For a face authentication solution to be universally adopted it must be a 100% software solution that runs on the billions of devices with standard cameras already in use, and it must be more secure than current legacy options (like fingerprint and 2D face).

A software solution like ZoOm from FaceTec can be quickly and easily integrated into nearly any app on just about any existing smart device. ZoOm can be deployed to millions of mobile users literally overnight, and provides […]

The post Face authentication and the future of security appeared first on IoT Now – How to run an IoT enabled business.

Read more here: www.m2mnow.biz/feed/