IPv6.net https://ipv6.net/ The IPv6 and IoT Resources Fri, 07 Nov 2025 15:37:06 +0000 en-US hourly 1 https://wordpress.org/?v=6.8.3 DIY BYOIP: a new way to Bring Your Own IP prefixes to Cloudflare https://ipv6.net/news/diy-byoip-a-new-way-to-bring-your-own-ip-prefixes-to-cloudflare/ Fri, 07 Nov 2025 15:37:05 +0000 https://ipv6.net/?p=2886859 When a customer wants to bring IP address space to Cloudflare, they’ve always had to reach out to their account team to put in a request. This request would then be sent to various Cloudflare engineering teams such as addressing and network engineering — and then the team responsible for the particular service they wanted […]

The post DIY BYOIP: a new way to Bring Your Own IP prefixes to Cloudflare appeared first on IPv6.net.


When a customer wants to bring IP address space to Cloudflare, they’ve always had to reach out to their account team to put in a request. This request would then be sent to various Cloudflare engineering teams such as addressing and network engineering, and then to the team responsible for the particular service they wanted to use the prefix with (e.g., CDN, Magic Transit, Spectrum, Egress). In addition, they had to work with their own legal teams, and potentially with another organization if they did not have primary ownership of an IP prefix, in order to get a Letter of Agency (LOA) issued through multiple rounds of approval. This process is complex, manual, and time-consuming for all parties involved, sometimes taking 4–6 weeks depending on the approvals required.

Well, no longer! Today, we are pleased to announce the launch of our self-serve BYOIP API, which enables our customers to onboard and set up their BYOIP prefixes themselves.

With self-serve, we handle the bureaucracy for you. We have automated this process using the gold standard for routing security — the Resource Public Key Infrastructure, RPKI. All the while, we continue to ensure the best quality of service by generating LOAs on our customers’ behalf, based on the security guarantees of our new ownership validation process. This ensures that customer routes continue to be accepted in every corner of the Internet.

Cloudflare takes the security and stability of the whole Internet very seriously. RPKI is a cryptographically-strong authorization mechanism and is, we believe, substantially more reliable than the common practice of relying upon human review of scanned documents. However, deployment and availability of some RPKI-signed artifacts like the Autonomous System Provider Authorization (ASPA) object remains limited, and for that reason we are limiting the initial scope of self-serve onboarding to BYOIP prefixes originated from Cloudflare’s autonomous system number (ASN) AS13335. By doing this, we only need to rely on the publication of Route Origin Authorisation (ROA) objects, which are widely available. This approach has the advantage of being safe for the Internet while also meeting the needs of most of our BYOIP customers.

Today, we take a major step forward in offering customers a more comprehensive IP address management (IPAM) platform. With the recent update to enable multiple services on a single BYOIP prefix and this latest advancement to enable self-serve onboarding via our API, we hope customers feel empowered to take control of their IPs on our network.

An evolution of Cloudflare BYOIP

We want Cloudflare to feel like an extension of your infrastructure, which is why we originally launched Bring-Your-Own-IP (BYOIP) back in 2020.

A quick refresher: Bring-your-own-IP is named for exactly what it does – it allows customers to bring their own IP space to Cloudflare. Customers choose BYOIP for a number of reasons, but the main reasons are control and configurability. An IP prefix is a range or block of IP addresses. Routers create a table of reachable prefixes, known as a routing table, to ensure that packets are delivered correctly across the Internet. When a customer’s Cloudflare services are configured to use the customer’s own addresses, onboarded to Cloudflare as BYOIP, a packet with a corresponding destination address will be routed across the Internet to Cloudflare’s global edge network, where it will be received and processed. BYOIP can be used with our Layer 7 services, Spectrum, or Magic Transit. 
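As a rough illustration of how that forwarding decision works, here is a longest-prefix-match sketch using Python's standard `ipaddress` module. The table contents and next-hop names are made up for illustration, and this is not Cloudflare's implementation:

```python
import ipaddress

# Toy routing table: prefix -> next hop. Real routers use optimized trie
# structures, but the selection rule is the same: among all prefixes that
# contain the destination address, the most specific (longest) one wins.
TABLE = {
    ipaddress.ip_network("0.0.0.0/0"): "upstream",
    ipaddress.ip_network("192.0.2.0/24"): "cloudflare-edge",      # BYOIP /24
    ipaddress.ip_network("192.0.2.128/25"): "cloudflare-edge-b",  # more specific
}

def lookup(destination: str) -> str:
    addr = ipaddress.ip_address(destination)
    matches = [net for net in TABLE if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)  # longest-prefix match
    return TABLE[best]

print(lookup("192.0.2.200"))   # inside the /25 -> cloudflare-edge-b
print(lookup("192.0.2.10"))    # only the /24 matches -> cloudflare-edge
print(lookup("198.51.100.1"))  # falls through to the default -> upstream
```

Once a BYOIP prefix is onboarded and advertised, routers across the Internet carry it in tables like this, which is how packets destined for the customer's addresses end up at Cloudflare's edge.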

A look under the hood: How it works

Today’s world of prefix validation

Let’s take a step back and look at the state of the BYOIP world right now. Say a customer has authority over a range of IP addresses, and they’d like to bring them to Cloudflare. We require the customer to provide us with a Letter of Agency (LOA) and to have an Internet Routing Registry (IRR) record matching their prefix and ASN. Once we have this, we require manual review by a Cloudflare engineer. There are a few issues with this process:

  • Insecure: The LOA is just a document—a piece of paper. The security of this method rests entirely on the diligence of the engineer reviewing the document. If the review is not able to detect that a document is fraudulent or inaccurate, it is possible for a prefix or ASN to be hijacked.

  • Time-consuming: Generating a single LOA is not always sufficient. If you are leasing IP space, we will ask you to provide documentation confirming that relationship as well, so that we can see a clear chain of authorisation from the original assignment or allocation of addresses to you. Gathering all the paper documents to verify this chain of ownership, combined with waiting for manual review, can result in weeks of delay before a prefix is deployed!

Automating trust: How Cloudflare verifies your BYOIP prefix ownership in minutes

Moving to a self-serve model allowed us to rethink the manner in which we conduct prefix ownership checks. We asked ourselves: How can we quickly, securely, and automatically prove you are authorized to use your IP prefix and intend to route it through Cloudflare?

We ended up killing two birds with one stone, thanks to a two-step process involving the creation of an RPKI ROA (verification of intent) and modification of IRR or rDNS records (verification of ownership). Self-serve not only lets prefixes be onboarded more quickly and without human intervention, but also applies more rigorous ownership checks than a simple scanned document ever could. While not 100% foolproof, it is a significant improvement in the way we verify ownership.

Tapping into the authorities

Regional Internet Registries (RIRs) are the organizations responsible for distributing and managing Internet number resources like IP addresses. There are five RIRs, each operating in a different region of the world. Originally allocated address space by the Internet Assigned Numbers Authority (IANA), they in turn assign and allocate that IP space to Local Internet Registries (LIRs) like ISPs.

This process is based on RIR policies which generally look at things like legal documentation, existing database/registry records, technical contacts, and BGP information. End-users can obtain addresses from an LIR, or in some cases through an RIR directly. As IPv4 addresses have become more scarce, brokerage services have been launched to allow addresses to be leased for fixed periods from their original assignees.

The Internet Routing Registry (IRR) is a separate system that focuses on routing rather than address assignment. Many organisations operate IRR instances and allow routing information to be published, including all five RIRs. While most IRR instances impose few barriers to the publication of routing data, those that are operated by RIRs are capable of linking the ability to publish routing information with the organisations to which the corresponding addresses have been assigned. We believe that being able to modify an IRR record protected in this way provides a good signal that a user has the rights to use a prefix.

Example of a route object containing validation token (using the documentation-only address 192.0.2.0/24):

% whois -h rr.arin.net 192.0.2.0/24

route:          192.0.2.0/24
origin:         AS13335
descr:          Example Company, Inc.
                cf-validation: 9477b6c3-4344-4ceb-85c4-6463e7d2453f
admin-c:        ADMIN2521-ARIN
tech-c:         ADMIN2521-ARIN
tech-c:         CLOUD146-ARIN
mnt-by:         MNT-CLOUD14
created:        2025-07-29T10:52:27Z
last-modified:  2025-07-29T10:52:27Z
source:         ARIN

For those that don’t want to go through the process of IRR-based validation, reverse DNS (rDNS) is provided as another secure method of verification. To manage rDNS for a prefix — whether that means creating a PTR record or a validation TXT record — you must be granted permission by the entity that allocated the IP block in the first place (usually your ISP or the RIR).

This permission is demonstrated in one of two ways:

  • Directly through the IP owner’s authenticated customer portal (ISP/RIR).

  • By the IP owner delegating authority to your third-party DNS provider via an NS record for your reverse zone.

Example of a reverse domain lookup using dig command (using the documentation-only address 192.0.2.0/24):

% dig cf-validation.2.0.192.in-addr.arpa TXT

; <<>> DiG 9.10.6 <<>> cf-validation.2.0.192.in-addr.arpa TXT
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 16686
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;cf-validation.2.0.192.in-addr.arpa. IN TXT

;; ANSWER SECTION:
cf-validation.2.0.192.in-addr.arpa. 300 IN TXT "b2f8af96-d32d-4c46-a886-f97d925d7977"

;; Query time: 35 msec
;; SERVER: 127.0.2.2#53(127.0.2.2)
;; WHEN: Fri Oct 24 10:43:52 EDT 2025
;; MSG SIZE  rcvd: 150

So how exactly is one supposed to modify these records? That’s where the validation token comes into play. Once you choose either the IRR or Reverse DNS method, we provide a unique, single-use validation token. You must add this token to the content of the relevant record, either in the IRR or in the DNS. Our system then looks for the presence of the token as evidence that the request is being made by someone with authorization to make the requested modification. If the token is found, verification is complete and your ownership is confirmed!
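For the rDNS path, the two pieces of the check can be sketched in a few lines. This is a simplified model: `validation_name` and `token_present` are illustrative helper names, not Cloudflare's actual code, and only the IPv4 /24 case is handled. The `cf-validation` label matches the name queried in the dig output above:

```python
import ipaddress

def validation_name(prefix: str) -> str:
    """Build the cf-validation TXT record name inside a /24's reverse zone."""
    net = ipaddress.ip_network(prefix)
    if net.version != 4 or net.prefixlen != 24:
        raise ValueError("this sketch only covers IPv4 /24 prefixes")
    # Reverse the network octets to form the in-addr.arpa zone name.
    o1, o2, o3, _ = str(net.network_address).split(".")
    return f"cf-validation.{o3}.{o2}.{o1}.in-addr.arpa"

def token_present(txt_strings: list[str], token: str) -> bool:
    """The core check: is the issued single-use token among the TXT strings?"""
    return token in txt_strings

print(validation_name("192.0.2.0/24"))
# -> cf-validation.2.0.192.in-addr.arpa

# Simulated answer section from the dig example:
answer = ["b2f8af96-d32d-4c46-a886-f97d925d7977"]
print(token_present(answer, "b2f8af96-d32d-4c46-a886-f97d925d7977"))  # True
```

The IRR path works analogously, except the token is looked for inside the route object rather than in a TXT record.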

The digital passport 🛂

Ownership is only half the battle; we also need to confirm your intention that you authorize Cloudflare to advertise your prefix. For this, we rely on the gold standard for routing security: the Resource Public Key Infrastructure (RPKI), and in particular Route Origin Authorization (ROA) objects.

A ROA is a cryptographically-signed document that specifies which Autonomous System Number (ASN) is authorized to originate your IP prefix. You can think of a ROA as the digital equivalent of a certified, signed, and notarised contract from the owner of the prefix.

Relying parties can validate the signatures in a ROA using the RPKI. You simply create a ROA that specifies Cloudflare’s ASN (AS13335) as an authorized originator and arrange for it to be signed. Many of our customers use the hosted RPKI systems available through RIR portals for this. When our systems detect this signed authorization, your routing intention is instantly confirmed.
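In effect, a ROA binds a prefix (plus a maximum prefix length) to an origin ASN, and relying parties classify each BGP announcement against the published ROAs. Here is a minimal sketch of that route origin validation logic, in the style of RFC 6811; the ROA data below is illustrative:

```python
import ipaddress
from dataclasses import dataclass

@dataclass
class ROA:
    prefix: ipaddress.IPv4Network  # the authorized prefix
    max_length: int                # longest announcement the ROA permits
    asn: int                       # the authorized origin AS

def validate(announced: str, origin_asn: int, roas: list[ROA]) -> str:
    """Classify an announcement as valid / invalid / unknown (RFC 6811 style)."""
    net = ipaddress.ip_network(announced)
    covering = [r for r in roas if net.subnet_of(r.prefix)]
    if not covering:
        return "unknown"      # no ROA covers this prefix at all
    for r in covering:
        if net.prefixlen <= r.max_length and origin_asn == r.asn:
            return "valid"    # right origin, within the permitted length
    return "invalid"          # covered, but this origin is not authorized

# Illustrative ROA authorizing Cloudflare's AS13335 for 192.0.2.0/24:
roas = [ROA(ipaddress.ip_network("192.0.2.0/24"), 24, 13335)]

print(validate("192.0.2.0/24", 13335, roas))    # valid
print(validate("192.0.2.0/24", 64512, roas))    # invalid: wrong origin ASN
print(validate("192.0.2.0/25", 13335, roas))    # invalid: exceeds maxLength
print(validate("198.51.100.0/24", 13335, roas)) # unknown: no covering ROA
```

Networks that enforce origin validation drop "invalid" announcements, which is why a hijacker originating your prefix from the wrong ASN gets filtered once your ROA is published.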

Many other companies that support BYOIP require a complex workflow involving creating self-signed certificates and manually modifying RDAP (Registration Data Access Protocol) records — a heavy administrative lift. By offering a choice of IRR object modification or reverse DNS TXT records, combined with RPKI, we provide a verification process that is much more familiar and straightforward for existing network operators.

The global reach guarantee

While the new self-serve flow ditches the need for the “dinosaur relic” that is the LOA, many network operators around the world still rely on it as part of the process of accepting prefixes from other networks.

To help ensure your prefix is accepted by adjacent networks globally, Cloudflare automatically generates a document on your behalf to be distributed in place of a LOA. This document provides information about the checks that we have carried out to confirm that we are authorised to originate the customer prefix, and confirms the presence of valid ROAs to authorise our origination of it. In this way we are able to support the workflows of network operators we connect to who rely upon LOAs, without our customers having the burden of generating them.


Staying away from black holes

One concern in designing the self-serve API is the trade-off between giving customers flexibility and implementing the necessary safeguards so that an IP prefix is never advertised without a matching service binding. If this were to happen, Cloudflare would be advertising a prefix with no idea what to do with the traffic when we receive it! We call this “blackholing” traffic. To handle this, we introduced the requirement of a default service binding — i.e., a service binding that spans the entire range of the onboarded IP prefix.

A customer can later layer additional service bindings on top of their default service binding, like putting CDN on top of a default Spectrum binding. This way, a prefix can never be advertised without a service binding, and our customers’ traffic is never blackholed.
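The safeguard amounts to a coverage rule: the service bindings on a prefix must collapse to the full onboarded range before it may be advertised. A sketch of that check, assuming a simplified data model that is not Cloudflare's actual API schema:

```python
import ipaddress

def fully_bound(prefix: str, bindings: list[str]) -> bool:
    """True if the service bindings exactly cover the onboarded prefix."""
    net = ipaddress.ip_network(prefix)
    bound = [ipaddress.ip_network(b) for b in bindings]
    if any(not b.subnet_of(net) for b in bound):
        return False  # a binding outside the prefix is never valid
    # collapse_addresses merges adjacent and overlapping ranges
    merged = list(ipaddress.collapse_addresses(bound))
    return merged == [net]

# A default binding spanning the whole /24 satisfies the rule:
print(fully_bound("192.0.2.0/24", ["192.0.2.0/24"]))                    # True
# Splitting the range across services still leaves it fully covered:
print(fully_bound("192.0.2.0/24", ["192.0.2.0/25", "192.0.2.128/25"]))  # True
# A lone /25 would leave half the prefix unbound (blackhole risk):
print(fully_bound("192.0.2.0/24", ["192.0.2.0/25"]))                    # False
```

Requiring a default binding first guarantees the coverage condition holds from the moment the prefix is advertised.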


Getting started

Check out our developer docs for the most up-to-date documentation on how to onboard, advertise, and add services to your IP prefixes via our API. Remember that onboardings can be complex, so don’t hesitate to ask questions or reach out to our professional services team if you’d like us to do it for you.

The future of network control

The ability to script and integrate BYOIP management into existing workflows is a game-changer for modern network operations, and we’re only just getting started. In the months ahead, look for self-serve BYOIP in the dashboard, as well as self-serve BYOIP offboarding to give customers even more control.

Cloudflare’s self-serve BYOIP onboarding gives customers unprecedented control and flexibility over their IP assets. Automating onboarding also enables a stronger security posture, moving away from manually-reviewed PDFs and driving RPKI adoption. By using these API calls, organizations can automate complex network tasks, streamline migrations, and build more resilient and agile network infrastructures.

Read more here: https://blog.cloudflare.com/diy-byoip/


Nexalta Guardian NXG0042AI – Qualcomm IPQ9574-based networking solution with 10GbE, WiFi 7, 5G, local AI (Crowdfunding) https://ipv6.net/news/nexalta-guardian-nxg0042ai-qualcomm-ipq9574-based-networking-solution-with-10gbe-wifi-7-5g-local-ai-crowdfunding/ Fri, 07 Nov 2025 11:37:06 +0000 https://ipv6.net/?p=2886827 Nexalta Guardian NXG0042AI is a large networking and IoT gateway “monster” powered by a Qualcomm IPQ9574 chipset offering 10GbE and 2.5GbE networking, WiFi 7 support with up to five concurrent WiFi radios, up to four/seven 5G modems, up to 52 (e)SIMs, and two on-board Compute Module slots for Rockchip RK3588 or NVIDIA Jetson SO-DIMM system-on-modules. […]

The post Nexalta Guardian NXG0042AI – Qualcomm IPQ9574-based networking solution with 10GbE, WiFi 7, 5G, local AI (Crowdfunding) appeared first on IPv6.net.


Nexalta Guardian NXG0042AI is a large networking and IoT gateway “monster” powered by a Qualcomm IPQ9574 chipset offering 10GbE and 2.5GbE networking, WiFi 7 support with up to five concurrent WiFi radios, up to four/seven 5G modems, up to 52 (e)SIMs, and two on-board Compute Module slots for Rockchip RK3588 or NVIDIA Jetson SO-DIMM system-on-modules. The main board (XWR A1) also offers 25Gbps WAN fabric, three MCUs for BMC, power management, and cryptography, M.2 slots for NVMe storage or AI accelerators, two mSATA sockets, HDMI video output, USB ports, Starlink satellite support, and a range of IoT protocol options such as LoRaWAN, 5G RedCap, UWB, Bluetooth LE, and so on.

Nexalta Guardian specifications:

Main SoC subsystem:

  • SoC – Qualcomm IPQ9574 quad-core Arm Cortex-A73 processor @ 2.2GHz
  • System Memory – 2GB DDR4 RAM
  • Storage – 8GB eMMC flash, 64MB NOR flash

SoM sub-system:

  • 2x 260-pin SO-DIMM sockets for: Turing RK1 […]

The post Nexalta Guardian NXG0042AI – Qualcomm IPQ9574-based networking solution with 10GbE, WiFi 7, 5G, local AI (Crowdfunding) appeared first on CNX Software – Embedded Systems News.

Read more here: https://www.cnx-software.com/2025/11/07/nexalta-guardian-nxg0042ai-qualcomm-ipq9574-based-networking-solution-with-10gbe-wifi-7-5g-local-ai/


How AI and PropTech Are Reshaping Commercial Real Estate Investing https://ipv6.net/news/how-ai-and-proptech-are-reshaping-commercial-real-estate-investing/ Fri, 07 Nov 2025 10:37:08 +0000 https://ipv6.net/?p=2886810 AI and PropTech are fundamentally rewriting commercial real estate investing. With 60% of firms now using AI for leasing and asset management, discover how smart building data and predictive analytics are creating new advantages, and hidden risks, for investors. How AI and PropTech Are Reshaping Commercial Real Estate Investing The commercial real estate (CRE) market […]

The post How AI and PropTech Are Reshaping Commercial Real Estate Investing appeared first on IPv6.net.


AI and PropTech are fundamentally rewriting commercial real estate investing. With 60% of firms now using AI for leasing and asset management, discover how smart building data and predictive analytics are creating new advantages, and hidden risks, for investors.

How AI and PropTech Are Reshaping Commercial Real Estate Investing

The commercial real estate (CRE) market is undergoing a major transformation, driven by the rise of artificial intelligence (AI) and property technology (PropTech). In 2023, the global PropTech market was valued at US$18.2 billion and is projected to grow at a 15% CAGR to reach US$52.2 billion by 2030. 

Meanwhile, AI adoption in real estate is also skyrocketing, with over 60% of real estate firms currently leveraging AI to streamline processes like leasing, asset management, and valuation. AI and PropTech are providing new ways to enhance operational efficiency, predict market trends, and manage assets with greater precision. 

These innovations are not only improving the way buildings are managed but also reshaping how investors make decisions, manage risks, and optimise returns.

The new landscape: Why AI & proptech matter

If you picture a typical CRE investor a decade ago, you might imagine someone poring over folders of leases, physically inspecting buildings, and relying on historic data for valuations. Today, that’s changing fast. According to recent research, AI tools are already used or being piloted by more than 60% of large real-estate organisations for functions such as lease administration, occupancy management, and investment assistance. Meanwhile, the PropTech space (covering smart building systems, big-data analytics, IoT sensors, and more) is forecast to accelerate rapidly.

Why does this matter for an investor in CRE? Because the ways we identify risk, assess value, operate assets, and engage tenants are being enhanced (and disrupted) by these technologies. If you’re investing in an office block, a warehouse, or a retail centre, you’ll soon find that your asset isn’t just a property but a “smart” asset, with data flowing about occupancy, running costs, energy use, tenant behaviour, and market movements.

Four big areas of transformation

Let’s dig into four key areas where AI and PropTech are reshaping CRE.

1. Better valuations & risk analysis

Valuations and risk assessments used to be quite manual: comparable buildings, rental history, market trends, etc. Now AI is helping crunch vast datasets and pick up patterns humans might miss. For instance, one study combined machine learning on property images with traditional data to estimate value with higher accuracy.

Also, AI is being used in “smart building” risk systems: satellites, sensors, and image analysis help determine whether a building may have structural issues or face higher maintenance costs.

That means investors can:

  • Estimate future cash‑flows more precisely.
  • Spot buildings with hidden risk (location shifts, changing tenant demand) earlier.
  • Price in things like energy costs, tenant churn, renewal risk more effectively.

2. Smarter asset & operational management

Owning a commercial property means dealing with tenants, leases, building systems, repairs, and utilities. PropTech is making that smoother. Examples: smart sensors in HVAC systems track usage and detect issues early; AI chatbots help tenant services; data dashboards show occupancy patterns.

For investors, this matters because lower operational costs + better tenant retention = higher returns. Also, the “experience” for tenants becomes more modern and appealing, which in turn supports occupancy and rental growth.

3. Improved investment decision‑making

When you decide whether to buy a warehouse or office block, you want a good picture of future demand, likely rental rates, vacancy risk and exit value. AI and PropTech give more data, more speed, more insight. Tools now exploit big data: demographics, mobility patterns, remote‑work trends, logistics flows etc. 

For example: investors in logistics real estate can track e‑commerce growth routes, last‑mile delivery demand and decide where a warehouse will be in demand in 5‑10 years. Or in office space, monitor how many firms are downsizing or switching to hybrid, see which buildings are “smart” and therefore likely to win tenants. The real estate industry itself says AI will support occupancy, workplace strategy and investment management.

4. New types of investment & access

PropTech is also lowering barriers and opening new ways to invest. Think: tokenisation of real estate assets, digital platforms allowing smaller investors to pool funds or access commercial property markets via online marketplaces. The research shows PropTech is increasingly merging with FinTech.

This means: smaller investors may gain exposure to CRE; transactions may become faster; more transparency. For large investors, it also means competition will increase and the “edge” may come from who uses tech best.

Practical implications for you as an investor

What does this mean in practical terms?

  • Due diligence must include tech‑readiness. Does the building have smart systems? How much data is available on occupancy, tenancy, and maintenance history?
  • Operational efficiency equals value. A building with outdated systems may hide costs that AI and sensors will expose, which can drag down yields.
  • Tenant experience is a differentiator. Smart buildings attract better tenants, renewals may be easier, rent premium may follow.
  • Market dynamics are shifting. For instance, office demand is not what it used to be in many markets. Investors must use data (and tech tools) to avoid the “wrong asset in the wrong place” trap.
  • Exit strategy must factor in tech‑risk. Buildings that cannot adapt or upgrade may face obsolescence risk. That reduces resale value.
  • Consider access and liquidity. PropTech platforms for fractional ownership or REITs with tech‑enabled assets may provide a way in without owning entire buildings.

But it’s not all smooth sailing

Of course, with innovation come new challenges.

  • Data quality and integration. AI is only as good as the data it uses. Many CRE assets still have siloed, poor data and integrating various systems is difficult.
  • Upfront cost and retrofitting. Upgrading a building to smart status can cost money. If you buy without factoring that in, you may get surprises.
  • Cybersecurity and privacy. With sensors, occupancy data and building‑systems connected, risk of breach increases. Investors must ask: are systems secure? Do they comply with regulations?
  • Human element. Even the best AI tools need human oversight. Decisions about tenants, building renovations, community fit, market shifts still require judgement. The tech supports but doesn’t replace.
  • Changing market fundamentals. Tech will help, but it won’t fully overcome larger forces such as location, macroeconomic shifts, regulation, or tenant behaviour.

What the future might bring

So what might the next 5‑10 years in CRE look like with AI and PropTech deeply embedded?

  • Smart buildings will become standard. Not just bells & whistles but core features: energy efficiency, flexible use of space, IoT sensors, tenant satisfaction dashboards.
  • Predictive asset management. Maintenance issues identified before they become costly; occupancy shifts flagged; lease renewal risks highlighted early.
  • More real‑time data and decision cycles. Investors won’t wait months for due diligence reports, they’ll get live dashboards showing building health, tenant habits, market trends.
  • New investment models. Fractional ownership, tokenisation, co‑investment platforms, digital marketplaces for CRE assets.
  • Greater sustainability and ESG integration. Tech will help track emissions, water use, energy, and investors will value properties that are “green” and digitally enabled.
  • Hybrid use and adaptive assets. Especially for offices/retail, buildings may be more flexible: co‑working, mixed uses, demand‑led redeployment of space, all enabled by tech tracking use patterns.

Final thoughts

If you’re investing in commercial real estate today, ignoring AI and PropTech is no longer an option. They’re not just fancy add‑ons, they’re becoming core enablers of value, risk mitigation and competitive advantage.

That said, this isn’t about buying a “smart building” label and calling it a day. What matters is how well you integrate tech into your asset strategy: assessing data readiness, upgrading systems when needed, managing tenants proactively, and keeping a human‑centred approach to decision‑making.

In many ways, the modern CRE investor is part property person, part tech strategist. The buildings are no longer passive boxes, they’re data‑rich, interactive assets. Use that to your benefit, and you’ll be far better placed in the future‑ready commercial real estate market.

The post How AI and PropTech Are Reshaping Commercial Real Estate Investing appeared first on IntelligentHQ.

Read more here: https://www.intelligenthq.com/ai-and-proptech-reshaping-commercial-real-estate-investing/


Unlocking Innovation: How a Technology Drive Can Accelerate Your Business https://ipv6.net/news/unlocking-innovation-how-a-technology-drive-can-accelerate-your-business/ Fri, 07 Nov 2025 09:37:06 +0000 https://ipv6.net/?p=2886805 In today’s fast-paced business world, staying ahead means keeping up with technology. A strong technology drive isn’t just about buying new gadgets; it’s about making smart moves that help your company grow and adapt. This article looks at how pushing forward with technology can really make a difference for your business, from understanding the changes […]

The post Unlocking Innovation: How a Technology Drive Can Accelerate Your Business appeared first on IPv6.net.


In today’s fast-paced business world, staying ahead means keeping up with technology. A strong technology drive isn’t just about buying new gadgets; it’s about making smart moves that help your company grow and adapt. This article looks at how pushing forward with technology can really make a difference for your business, from understanding the changes happening around us to putting new ideas into action. We’ll cover how to build a solid tech foundation, get your team on board, and even work with others to make sure your technology drive leads to real success.

Key Takeaways

  • Businesses need to be quick and able to change because technology and the digital world are always moving forward. A good technology drive helps with this.
  • To really innovate, everyone in the company needs to be on the same page about what innovation means and how to achieve it.
  • Updating your tech, like moving to the cloud or using AI, can make your business run better and find new ways to make money.
  • Getting everyone involved and making it okay to try new things, even if they don’t always work out, is important for a company that wants to innovate.
  • Working with other companies or startups can bring in new ideas and help you reach more customers, boosting your technology drive.

Embracing a Technology Drive for Business Growth

Understanding the Evolving Digital Landscape

The business world is changing fast. New technologies pop up all the time, and what worked yesterday might not work tomorrow. Companies that don’t keep up risk falling behind. It’s like trying to race a car with old tires – you just won’t get very far. Staying current with technology isn’t just a good idea; it’s necessary for survival and growth. This means paying attention to new tools, platforms, and ways of doing things that can make your business run better or offer new products and services.

The Imperative for Nimbleness and Adaptability

Because the digital world moves so quickly, businesses need to be able to change direction without too much trouble. Think of a large ship versus a speedboat. The speedboat can turn and react much faster. Companies need to build that kind of agility into their operations. This means not getting too stuck in old ways of thinking or working. Being able to adapt quickly allows you to take advantage of new opportunities as they appear and to handle unexpected challenges.

Here are a few reasons why being nimble is so important:

  • Market Shifts: Customer needs and market demands can change overnight. Being adaptable lets you respond effectively.
  • Competitive Pressure: Competitors are always looking for an edge. Quick adaptation helps you stay ahead or catch up.
  • Technological Advancements: New technologies can disrupt entire industries. Being ready to adopt them is key.

Adapting to new technologies and market changes requires a willingness to experiment and learn. It’s about building a company that can pivot when needed, rather than resisting change.

Strategic Vision for Innovation and Growth

Simply adopting new technology isn’t enough. You need a clear plan for how technology will help your business grow and innovate. This involves looking ahead and deciding where you want your company to be in the future and what role technology will play in getting you there. It’s about more than just fixing current problems; it’s about creating new possibilities and staying competitive long-term. A good strategy helps align everyone in the company towards common goals, making sure technology investments are purposeful and contribute to overall success.

Key Strategies to Fuel Your Technology Drive


In today’s fast-paced business environment, staying ahead means actively pursuing innovation. This isn’t just about having new ideas; it’s about having a structured approach to make those ideas a reality and drive business growth. To truly make a technology drive work for your company, you need clear strategies.

Defining Innovation: A Common Understanding

Before you can innovate, everyone in the company needs to agree on what innovation means for your business. It’s not a one-size-fits-all concept. Is it about creating entirely new products, improving existing processes, or finding new ways to serve customers? Having a shared definition prevents confusion and ensures everyone is working towards the same goals. This clarity is the first step in building a successful innovation engine.

Guiding Principles for Innovation Success

Once you know what innovation looks like, you need principles to guide your efforts. These are the fundamental rules that shape how your company approaches new ideas and projects. Think of them as the guardrails that keep your innovation initiatives on track.

Here are some principles to consider:

  • Focus on Business Value: Every innovation effort should clearly link back to a business objective, whether it’s increasing revenue, reducing costs, or improving customer satisfaction.
  • Embrace Experimentation: Allow for trying new things, even if they might not work out. Learning from experiments, successful or not, is key to progress.
  • Customer-Centricity: Keep the end-user at the heart of all innovation. Understand their needs and pain points to create solutions that truly matter.
  • Agility and Adaptability: Be prepared to pivot. The market changes, and your innovation strategy should be flexible enough to adapt.

A common pitfall is to chase every shiny new technology without a clear purpose. Having guiding principles helps ensure that your technology investments are strategic and aligned with your overall business direction, rather than just a reaction to trends.

Leveraging Diverse Teams for Speed to Market

Innovation often happens at the intersection of different perspectives. Bringing together people from various departments, backgrounds, and skill sets can spark creativity and lead to more robust solutions. Diverse teams can identify problems and opportunities that a homogenous group might miss.

Consider the following benefits of diverse teams:

  • Broader Idea Generation: Different viewpoints lead to a wider range of ideas.
  • Improved Problem-Solving: Varied experiences offer unique approaches to overcoming challenges.
  • Faster Decision-Making: When diverse perspectives are considered early, decisions are often more well-rounded and quicker to implement.

Forming cross-functional teams, perhaps involving members from IT, marketing, sales, and operations, can significantly speed up the process from idea conception to market launch. This collaborative approach helps to identify potential roadblocks early and ensures that solutions are practical and well-received by the market. For finance professionals looking to integrate technology for better insights, diverse teams are also key to understanding new processes.

By establishing a clear definition of innovation, adhering to guiding principles, and building diverse teams, your business can create a powerful engine for continuous growth and adaptation.

Modernizing Your Technology Foundation

To truly accelerate innovation, your business needs a solid, up-to-date technology base. Think of it like building a house; you wouldn’t start with a shaky foundation. Modernizing your tech means making sure your systems are not just functional but also flexible and ready for what’s next. This isn’t just about keeping up; it’s about creating a platform that actively supports your growth and competitive edge.

Cloud Transformation for Business Value

Moving to the cloud is more than just a trend; it’s a strategic move that can significantly change how your business operates. It offers a way to optimize your IT spending, making operations more efficient. Instead of managing physical servers, you can access computing power and storage as needed. This flexibility means you can scale up quickly when demand is high and scale down when it’s not, paying only for what you use. This approach helps reduce costs and allows your IT team to focus on more important projects rather than routine maintenance.

  • Scalability: Easily adjust resources up or down based on business needs.
  • Cost Savings: Reduce hardware expenses with pay-as-you-go models.
  • Accessibility: Access data and applications from anywhere, promoting remote work and collaboration.
  • Security: Reputable cloud providers invest heavily in security measures, often exceeding what individual businesses can manage.

Adopting cloud services can streamline operations and provide a more agile infrastructure, allowing businesses to respond faster to market changes and customer demands.

AI and Data Strategies for Competitive Edge

In today’s world, data is everywhere, and Artificial Intelligence (AI) is the key to making sense of it all. By developing smart data strategies, you can turn raw information into actionable insights. This means understanding your customers better, predicting market trends, and making more informed decisions. AI can automate tasks, improve customer service through chatbots, and even help in product development. Having a clear AI and data strategy is no longer optional; it’s a requirement for staying competitive.

  • Predictive Analytics: Forecast future trends and customer behavior.
  • Personalization: Tailor products and services to individual customer needs.
  • Operational Efficiency: Automate repetitive tasks and optimize workflows.
  • Risk Management: Identify potential issues before they become major problems.

Exploring Emerging Technologies for New Revenue

Innovation often comes from looking beyond the current technology landscape. Exploring emerging technologies like the Internet of Things (IoT), blockchain, or advanced automation can open up entirely new avenues for your business. These technologies can lead to the creation of new products, services, or even entirely new business models. For instance, IoT devices can provide valuable data about product usage, leading to better service offerings. Blockchain can bring transparency and security to transactions. Staying curious and experimenting with these new tools can help you discover untapped markets and create fresh income streams. It’s about being proactive and ready to adapt to the next wave of technological change. This forward-thinking approach is vital for long-term business health.

Cultivating an Innovation-Centric Culture

An organization’s culture is the bedrock upon which innovation is built. Without the right environment, even the most brilliant ideas can wither. It’s about creating a space where new thinking isn’t just welcomed, but actively encouraged and supported. This means looking at how people interact, how decisions are made, and how successes and failures are handled.

Encouraging Risk-Taking and Learning from Failure

Innovation inherently involves stepping into the unknown, which means there’s always a chance of not hitting the mark. A culture that punishes failure will quickly stifle creativity. Instead, we need to reframe setbacks not as dead ends, but as valuable learning opportunities. When an experiment doesn’t yield the expected results, the focus should be on understanding why. What did we learn? How can this inform our next steps? This mindset shift is critical for encouraging individuals and teams to propose bold ideas without the paralyzing fear of negative consequences. It’s about building resilience and a continuous improvement loop.

The journey of innovation is rarely a straight line. It’s a winding path filled with experiments, adjustments, and sometimes, outright surprises. Embracing this reality means creating a safe harbor for trying new things, even if they don’t pan out as planned. The insights gained from these explorations are often more valuable than a predictable, safe outcome.

Fostering Open Communication and Psychological Safety

For innovation to thrive, people need to feel comfortable sharing their thoughts, even if they seem unconventional. This requires a high degree of psychological safety – the belief that one can speak up without fear of embarrassment or punishment. Leaders play a key role here by actively listening, valuing diverse perspectives, and responding constructively to all contributions. Open channels of communication, whether through regular team meetings, suggestion boxes, or digital platforms, ensure that ideas can flow freely across all levels of the organization. This also means being transparent about the company’s goals and challenges, so everyone understands how their innovative contributions fit into the bigger picture. Exploring new technologies, like blockchain, can also be a low-risk way to understand cutting-edge advancements and stay at the forefront of technological innovation.

Investing in Research and Development

Dedicated investment in Research and Development (R&D) is a clear signal that innovation is a priority. This isn’t just about funding big projects; it can also involve allocating time and resources for smaller experiments, training, and exploration.

Here are a few ways to approach R&D investment:

  • Dedicated Budgets: Allocate a specific portion of the company’s budget for R&D initiatives, ensuring consistent support.
  • Time Allocation: Allow employees dedicated time, perhaps a percentage of their work week, to explore new ideas or technologies outside their immediate project scope.
  • Skill Development: Invest in training programs and workshops to equip employees with the latest skills and knowledge relevant to emerging technologies and innovation methodologies.
  • Cross-Functional Projects: Fund projects that bring together individuals from different departments to encourage diverse thinking and problem-solving.

This commitment demonstrates a long-term vision and provides the necessary fuel for generating new products, services, and processes that can drive future growth.

Driving Innovation Through Strategic Partnerships

Sometimes, the best way to move forward is by working with others. Your business doesn’t have to go it alone when it comes to finding new ideas and making them happen. Partnering with outside groups can bring fresh perspectives and new capabilities that you might not have internally. It’s about building connections that help everyone grow.

Collaborating with External Organizations and Startups

Working with other companies, especially newer ones or startups, can be a great way to inject new energy into your innovation efforts. Startups often have cutting-edge ideas and agile ways of working that can be hard to replicate within a larger, established business. By collaborating, you can gain access to these new technologies or business models without having to build them from scratch yourself. This could involve joint development projects, licensing agreements, or even just sharing insights.

  • Access to novel technologies: Startups are often at the forefront of new tech.
  • Faster development cycles: Their lean structures can mean quicker progress.
  • Fresh market insights: They may have a different view of customer needs.
  • Reduced R&D risk: You share the burden of exploring new ideas.

Forming Strategic Alliances for Market Access

Strategic alliances are formal agreements between companies to work together towards a common goal. When it comes to innovation, these alliances can open doors to new markets or customer segments. Perhaps your product needs a distribution channel that another company already controls, or maybe your service can be bundled with theirs to create a more attractive offering. These partnerships are built on mutual benefit and a shared vision for growth.

Leveraging Mergers and Acquisitions for Innovation

Sometimes, the quickest way to bring in new innovation is through buying another company or merging with one. This approach can rapidly add new technologies, talent, and market share to your business. It’s a significant step, of course, and requires careful planning to make sure the integration is smooth and the innovative potential is realized. When done right, it can be a powerful way to accelerate your technology drive and gain a competitive advantage.

Mergers and acquisitions can be a direct route to acquiring innovative capabilities and market presence, but they demand thorough due diligence and integration planning to yield the desired results.

Measuring and Iterating Your Technology Drive

After putting in the effort to drive innovation through technology, it’s important to check whether it’s actually working. You can’t just set things in motion and hope for the best. You need to see what’s happening and make adjustments. This is where setting clear goals and keeping an eye on progress comes in. It’s not about finding fault; it’s about getting smarter.

Setting Clear Goals for Innovation Growth

Before you even start, you need to know what success looks like. What are you trying to achieve with this technology push? Is it about getting new products out faster, improving customer satisfaction, or finding new ways to make money? Having specific, measurable goals gives you something to aim for. Think about what you want to see change in your business and write it down. These goals should connect directly to what you want your technology drive to accomplish.

Implementing Key Performance Indicators for Success

Once you have your goals, you need ways to track your progress. These are your Key Performance Indicators, or KPIs. They are the numbers and metrics that tell you if you’re on the right track. For example, if your goal is faster product launches, a KPI might be the average time it takes from idea to market. If it’s about customer satisfaction, you might track customer feedback scores or repeat purchase rates. It’s helpful to have a mix of different types of KPIs.

Here are some examples of KPIs you might consider:

  • Time to Market: How long does it take to bring a new technology-driven product or feature to customers?
  • Customer Adoption Rate: What percentage of your target audience is using the new technology or service?
  • Revenue from New Products/Services: How much money are your new technology-based offerings generating?
  • Process Efficiency Gains: Are your new technologies making internal operations faster or cheaper?
  • Employee Engagement with New Tools: Are your teams actually using and benefiting from the new technology?

Analyzing Performance Data for Continuous Improvement

Looking at the numbers is only half the battle. The real value comes from understanding what the data tells you and then acting on it. Regularly review your KPIs. Are you hitting your targets? If yes, great! Can you push them higher? If no, why not? Was the goal unrealistic, or did the technology not perform as expected? Maybe the way you implemented it needs tweaking. This analysis helps you learn what works and what doesn’t, so you can make smart changes.

This ongoing cycle of measuring, analyzing, and adjusting is what keeps your innovation efforts sharp and effective. It’s not a one-time thing; it’s a continuous process that helps your business adapt and grow.

Think of it like tending a garden. You plant the seeds (your technology initiatives), you water them and give them sunlight (your efforts), but you also need to check on them, pull weeds, and maybe add fertilizer (analyze and adjust) to make sure they grow strong and produce fruit.

Moving Forward with Technology

So, we’ve talked a lot about how bringing new technology into your business can really make a difference. It’s not just about having the latest gadgets; it’s about using them smartly to solve problems, reach more customers, and work more efficiently. Think of it like upgrading your tools. The right tools, used well, can help you build bigger and better things, faster. It takes some planning, sure, and maybe a bit of learning, but the payoff in terms of growth and staying competitive is pretty significant. Keep exploring what’s out there, and don’t be afraid to try new approaches. Your business will thank you for it.

Frequently Asked Questions

What is a technology drive and why is it important for businesses?

A technology drive is like giving your business a boost by using new and better technology. In today’s world, things change really fast, and using the latest tech helps your company keep up, do things better, and even come up with new ideas. It’s important because it helps you stay competitive and grow.

How can a business make sure its technology drive is successful?

To make sure your tech drive works well, you need a clear plan. This means knowing what ‘new ideas’ mean for your company, having simple rules to follow for creating new things, and getting different kinds of people to work together. Having a good strategy is key to making innovation happen and helping your business grow.

What role does modernizing technology play in innovation?

Updating your technology, like moving to the cloud or using smart tools like AI, is super important. It makes your business run smoother and can help you find new ways to make money. Think of it as giving your business a modern engine that can go faster and farther.

How can a company encourage its employees to be more innovative?

You can create a workplace where people feel safe to try new things, even if they might not work out perfectly. This means encouraging them to take chances, learn from mistakes, and talk openly with each other. Investing time and money in trying out new ideas is also a big help.

Why are partnerships important for driving innovation?

Working with other companies, especially smaller, newer ones, can bring fresh ideas and new skills to your business. Partnering up can help you reach new customers or get access to cool new technology faster than you could on your own. Sometimes, even buying another company can bring in great innovations.

How do you know if your technology drive is actually working?

You need to set clear goals for what you want to achieve with your new technology and ideas. Then, you track how well you’re doing using specific measurements. By looking at this information regularly, you can see what’s working, what’s not, and make changes to keep improving.

The post Unlocking Innovation: How a Technology Drive Can Accelerate Your Business appeared first on IntelligentHQ.

Read more here: https://www.intelligenthq.com/technology-drive/

The post Unlocking Innovation: How a Technology Drive Can Accelerate Your Business appeared first on IPv6.net.

]]>
Async QUIC and HTTP/3 made easy: tokio-quiche is now open-source https://ipv6.net/news/async-quic-and-http-3-made-easy-tokio-quiche-is-now-open-source/ Fri, 07 Nov 2025 06:37:04 +0000 https://ipv6.net/?p=2886786 A little over 6 years ago, we presented quiche, our open source QUIC implementation written in Rust. Today we’re announcing the open sourcing of tokio-quiche, our battle-tested, asynchronous QUIC library combining both quiche and the Rust Tokio async runtime. Powering Cloudflare’s Proxy B in Apple iCloud Private Relay and our next-generation Oxy-based proxies, tokio-quiche handles […]

The post Async QUIC and HTTP/3 made easy: tokio-quiche is now open-source appeared first on IPv6.net.

]]>

A little over 6 years ago, we presented quiche, our open source QUIC implementation written in Rust. Today we’re announcing the open sourcing of tokio-quiche, our battle-tested, asynchronous QUIC library combining both quiche and the Rust Tokio async runtime. Powering Cloudflare’s Proxy B in Apple iCloud Private Relay and our next-generation Oxy-based proxies, tokio-quiche handles millions of HTTP/3 requests per second with low latency and high throughput. tokio-quiche also powers Cloudflare WARP’s MASQUE client, replacing our WireGuard tunnels with QUIC-based tunnels, and the async version of h3i.

quiche was developed as a sans-io library, meaning that it implements the state machine required to handle the QUIC transport protocol while not making any assumptions about how its user intends to perform IO. This means that, with enough elbow grease, anyone can write an IO integration with quiche! This entails connecting or listening on a UDP socket, managing sending and receiving UDP datagrams on that socket while feeding all network information to quiche. Given we need this integration to be async, we’d have to do all this while integrating with an async Rust runtime. tokio-quiche does all of that for you, no grease required.
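
To make the sans-io pattern concrete, here is a minimal sketch of what “driving” such a state machine looks like, with a toy echo connection standing in for quiche’s `Connection`. The names (`EchoConn`, `recv`, `send`, `drive`) are illustrative, not quiche’s actual API, and the socket is simulated with in-memory buffers:

```rust
// Toy stand-in for a sans-io state machine: it holds internal state
// and never touches a socket itself.
struct EchoConn {
    pending: Vec<Vec<u8>>, // datagrams the state machine wants sent out
}

impl EchoConn {
    fn new() -> Self {
        EchoConn { pending: Vec::new() }
    }

    // Feed one incoming datagram into the state machine.
    fn recv(&mut self, buf: &[u8]) {
        self.pending.push(buf.to_vec());
    }

    // Pull the next outgoing datagram, if any, to write to the socket.
    fn send(&mut self) -> Option<Vec<u8>> {
        if self.pending.is_empty() {
            None
        } else {
            Some(self.pending.remove(0))
        }
    }
}

// The IO integration the caller must supply: shuttle datagrams between
// the socket (simulated here by in-memory Vecs) and the state machine.
fn drive(conn: &mut EchoConn, inbound: Vec<Vec<u8>>) -> Vec<Vec<u8>> {
    let mut outbound = Vec::new();
    for dgram in inbound {
        conn.recv(&dgram); // socket -> state machine
        while let Some(out) = conn.send() {
            outbound.push(out); // state machine -> socket
        }
    }
    outbound
}
```

A real integration replaces the in-memory queues with a UDP socket and adds timers and error handling, but the shape — feed datagrams in, drain datagrams out — is the same, and it is exactly this loop that tokio-quiche writes for you.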

Lowering the barrier to entry

Originally, tokio-quiche was only used as the core of Oxy’s HTTP/3 server. But the spark to create tokio-quiche as a standalone library was our need for a MASQUE-capable HTTP/3 client. Our Zero Trust and Privacy Teams need MASQUE clients to tunnel data through WARP and our Privacy Proxies respectively, and we wanted to use the same technology to build both the client and server.

We initially open-sourced quiche to share our memory-safe QUIC and HTTP/3 implementation with as many stakeholders as possible. Our focus at the time was a low-level, sans-io design that could integrate into many types of software and be deployed widely. We achieved this goal, with quiche deployed in many different clients and servers. However, integrating sans-io libraries into applications is an error-prone and time-consuming process. Our aim with tokio-quiche is to lower the barrier to entry by providing much of the needed code ourselves.

Cloudflare alone embracing HTTP/3 is not of much use if others wanting to interact with our products and systems don’t also adopt it. Open sourcing tokio-quiche makes integration with our systems more straightforward, and helps propel the industry into the new standard of HTTP. By contributing tokio-quiche back to the Rust ecosystem, we hope to promote the development and usage of HTTP/3, QUIC and new privacy preserving technologies.

tokio-quiche has been used internally for some years now. This gave us time to refine and battle-test it, demonstrating that it can handle millions of RPS. tokio-quiche is not intended to be a standalone HTTP/3 client or server, but implements low-level protocols and allows for higher-level projects in the future. The README contains examples of server and client event loops.

It’s actors all the way down

Tokio is a wildly popular asynchronous Rust runtime. It efficiently manages, schedules and executes the billions of asynchronous tasks which run on our edge. We use Tokio extensively at Cloudflare, so we decided to tightly integrate quiche with it – thus the name, tokio-quiche. Under the hood, tokio-quiche uses actors to drive different parts of the QUIC and HTTP/3 state machine. Actors are small tasks with internal state that usually use message passing over channels to communicate with the outside world.

The actor model is a great abstraction to use for async-ifying sans-io libraries due to the conceptual similarities between the two. Both actors and sans-io libraries have some kind of internal state which they want exclusive access to. They both usually interact with the outside world by sending and receiving “messages”. quiche’s “messages” are really raw byte buffers which represent incoming and outgoing network data. One of tokio-quiche’s “messages” is the Incoming struct which describes incoming UDP packets. Due to these similarities, async-ifying a sans-io library means: awaiting new messages or IO, translating the messages or IO into something the sans-io library understands, advancing the internal state machine, translating the state machine’s output to a message or IO, and finally sending the message or IO. (For more discussion on actors with Tokio, make sure to take a look at Alice Ryhl’s excellent blog post on the topic.)
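
As a rough sketch of that shape, here is a minimal actor with exclusive internal state and channel-based messaging. It uses std threads and channels for brevity; tokio-quiche’s actors are Tokio tasks with async channels, and the names here (`Msg`, `spawn_counter`) are purely illustrative:

```rust
use std::sync::mpsc;
use std::thread;

// Messages the actor understands. Requests that need an answer carry
// their own reply channel.
enum Msg {
    Add(u64),
    Get(mpsc::Sender<u64>),
}

// Spawn the actor and hand back only its mailbox. The counter state
// lives inside the thread, so nothing outside can touch it directly.
fn spawn_counter() -> mpsc::Sender<Msg> {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        let mut count: u64 = 0; // state only this actor can access
        for msg in rx {
            match msg {
                Msg::Add(n) => count += n,
                Msg::Get(reply) => {
                    let _ = reply.send(count);
                }
            }
        }
        // The loop ends (and the actor exits) once all senders drop.
    });
    tx
}
```

A caller holds only the `Sender`: it posts `Msg::Add` updates and asks for the current count by sending `Msg::Get` with a fresh reply channel, which is the same message-passing discipline tokio-quiche’s internal actors follow.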

The primary actor in tokio-quiche is the IO loop actor, which moves packets between quiche and the socket. Since QUIC is a transport protocol, it can carry any application protocol you want. HTTP/3 is quite common, but DNS over QUIC and the upcoming Media over QUIC are other examples. There’s even an RFC to help you create your own QUIC application! tokio-quiche exposes the ApplicationOverQuic trait to abstract over application protocols. The trait abstracts over quiche’s methods and the underlying I/O, allowing you to focus on your application logic. For example, our HTTP/3 debug and test client, h3i, is powered by a client-focused, non-HTTP/3 ApplicationOverQuic implementation.
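
The idea can be sketched with a simplified trait. The real ApplicationOverQuic has different methods (see the tokio-quiche docs), so treat everything below — `AppOverTransport`, `Shout`, `run_once` — as an illustration of the pattern, not the actual interface:

```rust
// The IO loop owns the transport and calls into the application at
// defined points, so the application itself never performs IO.
trait AppOverTransport {
    // Transport-level data arrived for the application.
    fn on_data(&mut self, data: &[u8]);
    // The IO loop is ready to send; return bytes to write, if any.
    fn produce(&mut self) -> Option<Vec<u8>>;
}

// A trivial application protocol: uppercase whatever it receives.
struct Shout {
    queued: Vec<Vec<u8>>,
}

impl AppOverTransport for Shout {
    fn on_data(&mut self, data: &[u8]) {
        self.queued.push(data.to_ascii_uppercase());
    }
    fn produce(&mut self) -> Option<Vec<u8>> {
        if self.queued.is_empty() {
            None
        } else {
            Some(self.queued.remove(0))
        }
    }
}

// One generic IO-loop step that works with *any* application impl.
fn run_once<A: AppOverTransport>(app: &mut A, input: &[u8]) -> Vec<Vec<u8>> {
    app.on_data(input);
    let mut out = Vec::new();
    while let Some(bytes) = app.produce() {
        out.push(bytes);
    }
    out
}
```

Because the loop is generic over the trait, swapping HTTP/3 for DNS over QUIC (or a custom protocol) means writing a new implementation, not a new IO loop — which is the point of the abstraction.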


Server Architecture Diagram

tokio-quiche ships with an HTTP/3-focused ApplicationOverQuic called H3Driver. H3Driver hooks up quiche’s HTTP/3 module to this IO loop to provide the building blocks for an async HTTP/3 client or server. The driver turns quiche’s raw HTTP/3 events into higher-level events and asynchronous body data streams, allowing you to respond to them in kind. H3Driver is itself generic, exposing ServerH3Driver and ClientH3Driver variants that each stack additional behavior on top of the core driver’s events.


Internal Data Flow

Inside tokio-quiche, we spawn two important tasks that facilitate data movement from a socket to quiche. The first is the InboundPacketRouter, which owns the receiving half of the socket and routes inbound datagrams by their destination connection ID (DCID) to a per-connection channel. The second task, the IoWorker actor, is the aforementioned IO loop and drives a single quiche Connection. It intersperses quiche calls with ApplicationOverQuic methods, ensuring you can inspect the connection before and after any IO interaction.
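
That routing step can be sketched as follows, with a hypothetical `Router` type and a fixed 4-byte DCID prefix standing in for real QUIC header parsing (the actual InboundPacketRouter parses variable-length connection IDs and uses async channels):

```rust
use std::collections::HashMap;
use std::sync::mpsc;

// Maps each connection's DCID to the sending half of its channel.
struct Router {
    routes: HashMap<Vec<u8>, mpsc::Sender<Vec<u8>>>,
}

impl Router {
    fn new() -> Self {
        Router { routes: HashMap::new() }
    }

    // Register a connection; the per-connection worker keeps the
    // Receiver and awaits datagrams on it.
    fn register(&mut self, dcid: Vec<u8>) -> mpsc::Receiver<Vec<u8>> {
        let (tx, rx) = mpsc::channel();
        self.routes.insert(dcid, tx);
        rx
    }

    // Route one inbound datagram to the connection that owns its DCID.
    // Here we pretend the DCID is always the first 4 bytes.
    fn route(&self, datagram: &[u8]) {
        if datagram.len() < 4 {
            return; // too short to carry a DCID
        }
        if let Some(tx) = self.routes.get(&datagram[..4]) {
            let _ = tx.send(datagram.to_vec());
        }
    }
}
```

Datagrams for unknown DCIDs are simply dropped in this sketch; a real server would instead treat them as potential new connections and complete a handshake before registering a route.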

More blog posts on the creation of tokio-quiche are coming soon. We’ll discuss actor models and mutexes, UDP GRO and GSO, tokio task coop budgeting, and more.

Next up: more on QUIC and beyond!

tokio-quiche is an important foundation for Cloudflare’s investment into the QUIC and HTTP/3 ecosystem for Tokio – but it is still only a building block with its own complexity. In the future, we plan to release the same easy-to-use HTTP client and server abstractions that power our Oxy proxies and WARP clients today. Stay tuned for more blog posts on QUIC and HTTP/3 at Cloudflare, including an open-source client for customers of our Privacy Proxies and a completely new service that’s handling millions of RPS with tokio-quiche!

For now, check out the tokio-quiche crate on crates.io and its source code on GitHub to build your very own QUIC application. Could be a simple echo server, a DNS-over-QUIC client, a custom VPN, or even a fully-fledged HTTP server. Maybe you will beat us to the punch?

Read more here: https://blog.cloudflare.com/async-quic-and-http-3-made-easy-tokio-quiche-is-now-open-source/

The post Async QUIC and HTTP/3 made easy: tokio-quiche is now open-source appeared first on IPv6.net.

]]>
Building a custom Arduino-controlled ‘analog’ dashboard https://ipv6.net/news/building-a-custom-arduino-controlled-analog-dashboard/ Thu, 06 Nov 2025 20:07:06 +0000 https://ipv6.net/?p=2886746 The folks over at Bad Obsession Motorsport in the UK specialize in unusual automotive builds, from a four-wheel drive Austin Mini to a custom camper van called The Escargot. While working on that Mini, which they’ve dubbed “Binky,” they realized that they’d need to create their own custom instrument cluster. But they wanted it to look appropriate […]

The post Building a custom Arduino-controlled ‘analog’ dashboard appeared first on IPv6.net.

]]>

The folks over at Bad Obsession Motorsport in the UK specialize in unusual automotive builds, from a four-wheel drive Austin Mini to a custom camper van called The Escargot. While working on that Mini, which they’ve dubbed “Binky,” they realized that they’d need to create their own custom instrument cluster. But they wanted it to look appropriate for the classic car, so they used Arduino boards to build this “analog” dashboard.

This started as a more modest project. The Bad Obsession Motorsport team just wanted to add LED lighting to existing gauges. Off-the-shelf LED controller solutions lacked the kind of brightness control they wanted and so they began experimenting with Arduino boards. They found that brightness control via PWM (pulse-width modulation) is trivial with an Arduino UNO Rev3 and so, emboldened by their success, they decided to expand the concept with new gauges.

Those new gauges indicate fuel level and coolant temperature. They have moving needles and look like they’re analog, but they’re actually digitally controlled by Arduino boards. Small hobby servo motors move the needles according to readings taken by a float sensor and a temperature sensor, gathered by an Arduino Nano Every board through its analog pins.

The Bad Obsession Motorsport team did have trouble with the lighting control and the servo control interfering with each other (likely an issue with either power draw or a code glitch), so they split those into two separate systems. One Nano controls the LEDs and a second Nano controls the gauge servos.

The final challenge was returning the servos to their “home” positions after turning the car off. Because all of the power comes from the car’s electrical system and that disconnects when the key is removed, they used capacitors to store a bit of extra juice. When the Arduino detects the power disconnect, it immediately moves the servos to their home positions. Then they’re ready to go the next time the car starts up.

The post Building a custom Arduino-controlled ‘analog’ dashboard appeared first on Arduino Blog.

Read more here: https://blog.arduino.cc/2025/11/06/building-a-custom-arduino-controlled-analog-dashboard/

The post Building a custom Arduino-controlled ‘analog’ dashboard appeared first on IPv6.net.

]]>
Aeris launches Episode 7 of their podcast ‘IoT Real Talk’ https://ipv6.net/news/aeris-launches-episode-7-of-their-podcast-iot-real-talk/ Thu, 06 Nov 2025 15:37:16 +0000 https://ipv6.net/?p=2886683 Aeris have launched the 7th episode of their newly launched podcast series ‘IoT Real Talk’. Aeris brings you behind the scenes of the global Internet of Things (IoT) landscape. Recorded The post Aeris launches Episode 7 of their podcast ‘IoT Real Talk’ appeared first on IoT Now News – How to run an IoT enabled […]

The post Aeris launches Episode 7 of their podcast ‘IoT Real Talk’ appeared first on IPv6.net.

]]>

Aeris have launched the 7th episode of their newly launched podcast series ‘IoT Real Talk’. Aeris brings you behind the scenes of the global Internet of Things (IoT) landscape. Recorded

The post Aeris launches Episode 7 of their podcast ‘IoT Real Talk’ appeared first on IoT Now News – How to run an IoT enabled business.

Read more here: https://www.iot-now.com/2025/11/06/153981-aeris-launches-episode-7-of-their-podcast-iot-real-talk/

The post Aeris launches Episode 7 of their podcast ‘IoT Real Talk’ appeared first on IPv6.net.

]]>
3D printing a stunning silver mechatronic beetle necklace https://ipv6.net/news/3d-printing-a-stunning-silver-mechatronic-beetle-necklace/ Thu, 06 Nov 2025 15:37:14 +0000 https://ipv6.net/?p=2886684 CNC milling is expensive, messy, and requires a lot of technical knowledge, which is why we all want the ability to 3D print metal. But while technologies like SLS metal printing do exist, they are generally out of reach of hobbyists. Fortunately, there is a secret third option: investment casting. Formlabs makes casting resins specifically […]

The post 3D printing a stunning silver mechatronic beetle necklace appeared first on IPv6.net.

]]>

CNC milling is expensive, messy, and requires a lot of technical knowledge, which is why we all want the ability to 3D print metal. But while technologies like SLS metal printing do exist, they are generally out of reach of hobbyists. Fortunately, there is a secret third option: investment casting. Formlabs makes casting resins specifically for that job, which Allie Katz and Mr. Volt used to create this stunning mechatronic beetle necklace in silver.

Allie Katz and Mr. Volt made this at a hackathon that Formlabs hosted, so they were able to take advantage of the available printers, resins, and in-house casting expert. The necklace features a large silver beetle resting on a pair of leaves designed to lay across the wearer’s sternum. When the beetle detects heat, such as from a person approaching, it opens its wings to reveal a colorful array of LEDs like stained glass on its thorax, while it wiggles its antennae.

The leaves were printed in flexible resin, while the silver parts were printed in casting resin. The latter then went into plaster investment molds, which were filled with molten silver. After cooling, the parts were cleaned and carefully polished to a shine.

The beetle’s electronic functions operate under the control of an Arduino Nano 33 BLE board. It looks for heat signatures through a thermal camera module. Based on that input, it actuates the three tiny servo motors and the 35 individual RGB LEDs.

The result is beautiful, and the beetle’s silver carapace really couldn’t have been produced by automated means in any other way.

The post 3D printing a stunning silver mechatronic beetle necklace appeared first on Arduino Blog.

Read more here: https://blog.arduino.cc/2025/11/06/3d-printing-a-stunning-silver-mechatronic-beetle-necklace/


]]>
Arviem and Tech Mahindra partner to deliver enhanced IoT and supply chain visibility solutions https://ipv6.net/news/arviem-and-tech-mahindra-partner-to-deliver-enhanced-iot-and-supply-chain-visibility-solutions/ Thu, 06 Nov 2025 13:07:05 +0000 https://ipv6.net/?p=2886646 Arviem, a global contributor in real-time cargo monitoring and supply chain visibility solutions, has announced a partnership with Tech Mahindra, a global provider of technology consulting and digital solutions to The post Arviem and Tech Mahindra partner to deliver enhanced IoT and supply chain visibility solutions appeared first on IoT Now News – How to […]

The post Arviem and Tech Mahindra partner to deliver enhanced IoT and supply chain visibility solutions appeared first on IPv6.net.

]]>

Arviem, a global provider of real-time cargo monitoring and supply chain visibility solutions, has announced a partnership with Tech Mahindra, a global provider of technology consulting and digital solutions to

The post Arviem and Tech Mahindra partner to deliver enhanced IoT and supply chain visibility solutions appeared first on IoT Now News – How to run an IoT enabled business.

Read more here: https://www.iot-now.com/2025/11/06/153976-arviem-and-tech-mahindra-partner-to-deliver-enhanced-iot-and-supply-chain-visibility-solutions/


]]>
How Workers VPC Services connects to your regional private networks from anywhere in the world https://ipv6.net/news/how-workers-vpc-services-connects-to-your-regional-private-networks-from-anywhere-in-the-world/ Thu, 06 Nov 2025 08:37:04 +0000 https://ipv6.net/?p=2886607 In April, we shared our vision for a global virtual private cloud on Cloudflare, a way to unlock your applications from regionally constrained clouds and on-premise networks, enabling you to build truly cross-cloud applications. Today, we’re announcing the first milestone of our Workers VPC initiative: VPC Services. VPC Services allow you to connect to your […]

The post How Workers VPC Services connects to your regional private networks from anywhere in the world appeared first on IPv6.net.

]]>

In April, we shared our vision for a global virtual private cloud on Cloudflare, a way to unlock your applications from regionally constrained clouds and on-premise networks, enabling you to build truly cross-cloud applications.

Today, we’re announcing the first milestone of our Workers VPC initiative: VPC Services. VPC Services allow you to connect to your APIs, containers, virtual machines, serverless functions, databases and other services in regional private networks via Cloudflare Tunnels from your Workers running anywhere in the world. 

Once you set up a Tunnel in your desired network, you can register each service that you want to expose to Workers by configuring its host or IP address. Then, you can access the VPC Service as you would any other Workers service binding — requests will automatically route to the VPC Service over Cloudflare’s network, regardless of where your Worker is executing:

export default {
  async fetch(request, env, ctx) {
    // Perform application logic in Workers here

    // Call an external API running on ECS in AWS using the binding
    const response = await env.AWS_VPC_ECS_API.fetch("http://internal-host.com");

    // Additional application logic in Workers
    return new Response();
  },
};

Workers VPC is now available to everyone using Workers, at no additional cost during the beta, as is Cloudflare Tunnels. Try it out now. And read on to learn more about how it works under the hood.

Connecting the networks you trust, securely

Your applications span multiple networks, whether they are on-premise or in external clouds. But it’s been difficult to connect from Workers to your APIs and databases locked behind private networks. 

We have previously described how traditional virtual private clouds and networks entrench you into traditional clouds. While they provide you with workload isolation and security, traditional virtual private clouds make it difficult to build across clouds, access your own applications, and choose the right technology for your stack.

A significant part of the cloud lock-in is the inherent complexity of building secure, distributed workloads. VPC peering requires you to configure routing tables, security groups and network access-control lists, since it relies on networking across clouds to ensure connectivity. In many organizations, this means weeks of discussions and many teams involved to get approvals. This lock-in is also reflected in the solutions invented to wrangle this complexity: Each cloud provider has their own bespoke version of a “Private Link” to facilitate cross-network connectivity, further restricting you to that cloud and the vendors that have integrated with it.

With Workers VPC, we’re simplifying that dramatically. You set up your Cloudflare Tunnel once, with the necessary permissions to access your private network. Then, you can configure Workers VPC Services, with the tunnel and hostname (or IP address and port) of the service you want to expose to Workers. Any request made to that VPC Service will use this configuration to route to the given service within the network.

{
  "type": "http",
  "name": "vpc-service-name",
  "http_port": 80,
  "https_port": 443,
  "host": {
    "hostname": "internally-resolvable-hostname.com",
    "resolver_network": {
      "tunnel_id": "0191dce4-9ab4-7fce-b660-8e5dec5172da"
    }
  }
}

This ensures that, once represented as a Workers VPC Service, a service in your private network is secured in the same way other Cloudflare bindings are, using the Workers binding model. Let’s take a look at a simple VPC Service binding example:

{
  "name": "WORKER-NAME",
  "main": "./src/index.js",
  "vpc_services": [
    {
      "binding": "AWS_VPC2_ECS_API",
      "service_id": "5634563546"
    }
  ]
}

Like other Workers bindings, when you deploy a Worker project that tries to connect to a VPC Service, the access permissions are verified at deploy time to ensure that the Worker has access to the service in question. And once deployed, the Worker can use the VPC Service binding to make requests to that VPC Service — and only that service within the network. 

That’s significant: instead of exposing the entire network to the Worker, only the specific VPC Service can be reached. This access is verified at deploy time, providing more explicit and transparent service access control than traditional networks and access-control lists do.

This is a key factor in the design of Workers bindings: security by default with simpler management, which also makes Workers immune to Server-Side Request Forgery (SSRF) attacks. We’ve gone deep on the binding security model in the past, and it becomes that much more critical when accessing your private networks.
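As a sketch of what that model means in practice (our illustration, not Cloudflare’s implementation; the internal hostname and the mock environment are assumptions):

```javascript
// Sketch of the binding security model (not Cloudflare's code).
// "inventory.internal" is a hypothetical private hostname.
const worker = {
  async fetch(request, env) {
    // Deploy-time-verified access to exactly one private service:
    const res = await env.AWS_VPC2_ECS_API.fetch("http://inventory.internal/items");
    // A global fetch(userSuppliedUrl) here could only reach the public
    // Internet, so an attacker cannot steer the Worker into the VPC (SSRF).
    return res;
  },
};
// In a real Worker this object would be the default export.
```

Because the Worker holds a capability to one service rather than a route into a network, revoking access is as simple as removing the binding.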

Notably, the binding model is also important when considering what Workers are: scripts running on Cloudflare’s global network. They are not, in contrast to traditional clouds, individual machines with IP addresses, and do not exist within networks. Bindings provide secure access to other resources within your Cloudflare account – and the same applies to Workers VPC Services.

A peek under the hood

So how do VPC Services and their bindings route network requests from Workers anywhere on Cloudflare’s global network to regional networks using tunnels? Let’s look at the lifecycle of a sample HTTP request made through a VPC Service’s dedicated fetch() call:


It all starts in the Worker code, where the .fetch() function of the desired VPC Service is called with a standard JavaScript Request (Step 1). The Workers runtime uses a Cap’n Proto remote procedure call to send the original HTTP request alongside additional context, as it does for many other Workers bindings.

The Binding Worker of the VPC Service system receives the HTTP request along with the binding context, in this case the Service ID of the VPC Service being invoked. The Binding Worker proxies this information to the Iris Service within an HTTP CONNECT connection, a standard pattern across Cloudflare’s bindings that keeps the logic for connecting to Cloudflare’s edge services in Worker code rather than in the Workers runtime itself (Step 2).

The Iris Service is the main service for Workers VPC. Its responsibility is to accept requests for a VPC Service and route them to the network in which your VPC Service is located. It does this by integrating with Apollo, an internal service of Cloudflare One. Apollo provides a unified interface that abstracts away the complexity of securely connecting to networks and tunnels across various layers of networking.

To integrate with Apollo, Iris must complete two tasks. First, Iris parses the VPC Service ID from the metadata and fetches the associated tunnel’s details, namely the tunnel ID and type, from our configuration store (Step 3). This is the information Iris needs to send the original request to the right tunnel.

Second, Iris will create the UDP datagrams containing DNS questions for the A and AAAA records of the VPC Service’s hostname. These datagrams will be sent first, via Apollo. Once DNS resolution is completed, the original request is sent along, with the resolved IP address and port (Step 4). That means that steps 4 through 7 happen in sequence twice for the first request: once for DNS resolution and a second time for the original HTTP Request. Subsequent requests benefit from Iris’ caching of DNS resolution information, minimizing request latency.
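That first-request cost and the later cache hits can be sketched as follows (a simplified illustration, not Cloudflare’s code; the stand-in resolver, the cached address, and the TTL are assumptions):

```javascript
// Simplified sketch of Iris' resolve-then-cache behavior (not Cloudflare's
// actual code). resolveThroughTunnel stands in for the A/AAAA questions
// sent via Apollo through the tunnel; the 60-second TTL is an assumption.
const dnsCache = new Map();
let tunnelRoundTrips = 0;

async function resolveThroughTunnel(hostname) {
  tunnelRoundTrips++; // steps 4 through 7 run once just for DNS on a cache miss
  return { hostname, address: "10.0.12.34", resolvedAt: Date.now() };
}

async function resolveCached(hostname, ttlMs = 60_000) {
  const hit = dnsCache.get(hostname);
  if (hit && Date.now() - hit.resolvedAt < ttlMs) {
    return hit; // cache hit: the original request can be sent immediately
  }
  const answer = await resolveThroughTunnel(hostname);
  dnsCache.set(hostname, answer);
  return answer;
}
```

On this model, only the very first request to a hostname pays for the extra tunnel round trip; every later request within the TTL skips straight to sending the original HTTP request.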

Apollo then receives the metadata of the Cloudflare Tunnel that needs to be accessed, along with the DNS resolution UDP datagrams or the HTTP request TCP packets. Using the tunnel ID, it determines which datacenter is connected to the Cloudflare Tunnel. This datacenter is in a region close to the Cloudflare Tunnel, so Apollo routes the DNS resolution messages and the original request to the Tunnel Connector Service running in that datacenter (Step 5).


The Tunnel Connector Service is responsible for providing access to the Cloudflare Tunnel for the rest of Cloudflare’s network. It relays the DNS resolution questions, and subsequently the original request, to the tunnel over the QUIC protocol (Step 6).

Finally, the Cloudflare Tunnel will send the DNS resolution questions to the DNS resolver of the network it belongs to. It will then send the original HTTP Request from its own IP address to the destination IP and port (Step 7). The results of the request are then relayed all the way back to the original Worker, from the datacenter closest to the tunnel all the way to the original Cloudflare datacenter executing the Worker request.
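Put together, the hop sequence above can be sketched as a chain of hand-offs (a runnable simulation with invented function names, not Cloudflare’s implementation; the DNS round trip is elided for brevity, and the IDs are reused from the configuration examples earlier in the post):

```javascript
// Each hop forwards the request plus routing metadata (Steps 2 through 7).
const configStore = new Map([
  ["5634563546", { tunnelId: "0191dce4-9ab4-7fce-b660-8e5dec5172da", type: "http" }],
]);

function bindingWorker(req, serviceId) {
  return iris({ ...req, serviceId }); // Step 2: proxy with binding context
}

function iris(req) {
  const tunnel = configStore.get(req.serviceId); // Step 3: look up the tunnel
  return apollo({ ...req, tunnel });             // Step 4: hand off to Apollo
}

function apollo(req) {
  // Step 5: route to the datacenter holding the tunnel connection
  return tunnelConnector({ ...req, datacenter: "dc-near-" + req.tunnel.tunnelId });
}

function tunnelConnector(req) {
  return tunnelOrigin(req); // Step 6: relay over QUIC
}

function tunnelOrigin(req) {
  // Step 7: the tunnel issues the request from inside the private network
  return { status: 200, via: [req.serviceId, req.tunnel.tunnelId, req.datacenter] };
}
```

The response then retraces the same chain in reverse, back to the datacenter executing the Worker.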

What VPC Service allows you to build

This unlocks a whole new tranche of applications you can build on Cloudflare. For years, Workers have excelled at the edge, but they’ve largely been kept “outside” your core infrastructure. They could only call public endpoints, limiting their ability to interact with the most critical parts of your stack — like a private accounts API or an internal inventory database. Now, with VPC Services, Workers can securely access those private APIs, databases, and services, fundamentally changing what’s possible.


This immediately enables true cross-cloud applications that span Cloudflare Workers and any other cloud like AWS, GCP or Azure. We’ve seen many customers adopt this pattern over the course of our private beta, establishing private connectivity between their external clouds and Cloudflare Workers. We’ve even done so ourselves, connecting our Workers to Kubernetes services in our core datacenters to power the control plane APIs for many of our services. Now, you can build the same powerful, distributed architectures, using Workers for global scale while keeping stateful backends in the network you already trust.

It also means you can connect to your on-premise networks from Workers, allowing you to modernize legacy applications with the performance and infinite scale of Workers. More interesting still are some emerging use cases for developer workflows. We’ve seen developers run cloudflared on their laptops to connect a deployed Worker back to their local machine for real-time debugging. The full flexibility of Cloudflare Tunnels is now a programmable primitive accessible directly from your Worker, opening up a world of possibilities.

The path ahead of us

VPC Services is the first milestone within the larger Workers VPC initiative, but we’re just getting started. Our goal is to make connecting to any service and any network, anywhere in the world, a seamless part of the Workers experience. Here’s what we’re working on next:

Deeper network integration. Starting with Cloudflare Tunnels was a deliberate choice. It’s a highly available, flexible, and familiar solution, making it the perfect foundation to build upon. To provide more options for enterprise networking, we’re going to be adding support for standard IPsec tunnels, Cloudflare Network Interconnect (CNI), and AWS Transit Gateway, giving you and your teams more choices and potential optimizations. Crucially, these connections will also become truly bidirectional, allowing your private services to initiate connections back to Cloudflare, for example to push events to Queues or fetch from R2.

Expanded protocol and service support. The next step beyond HTTP is enabling access to TCP services. This will first be achieved by integrating with Hyperdrive. We’re evolving the previous Hyperdrive support for private databases to be simplified with VPC Services configuration, avoiding the need to add Cloudflare Access and manage security tokens. This creates a more native experience, complete with Hyperdrive’s powerful connection pooling. Following this, we will add broader support for raw TCP connections, unlocking direct connectivity from the Workers ‘connect()’ API to services like Redis caches and message queues.

Ecosystem compatibility. We want to make connecting to a private service feel as natural as connecting to a public one. To do so, we will provide a unique autogenerated hostname for each Workers VPC Service, similar to Hyperdrive’s connection strings. This will make it easier to use Workers VPC with existing client libraries and object-relational mappers that may require a hostname (e.g., in a global ‘fetch()’ call or a MongoDB connection string). A Workers VPC Service hostname will automatically resolve and route to the correct VPC Service, just as the binding’s ‘fetch()’ does.

Get started with Workers VPC

We’re excited to release Workers VPC Services into open beta today. We’ve spent months building out and testing this first milestone of Workers-to-private-network access, and we’ve refined it further based on feedback from both internal teams and customers during the closed beta.

Now, we’re looking forward to enabling everyone to build cross-cloud apps on Workers with Workers VPC, available for free during the open beta. With Workers VPC, you can bring your apps on private networks to region Earth, closer to your users and available to Workers across the globe.

Get started with Workers VPC Services for free now.

Read more here: https://blog.cloudflare.com/workers-vpc-open-beta/


]]>