Elon Musk’s X won’t be regulated under the European Union’s Digital Markets Act (DMA), the Commission decided Wednesday, despite the social media platform hitting usage thresholds earlier this year.
The decision means X won’t be subject to the DMA’s list of operational ‘dos and don’ts’ — in areas like its use of third-party data and user consent to tracking ads — for the foreseeable future. The pan-EU regime targets Big Tech with up-front rules that are generally aimed at ensuring fairer dealing with individual and business users (so far seven companies have been designated as DMA gatekeepers for a total of two dozen “core platform services”, including other social media giants like Meta and TikTok).
While not joining the DMA gatekeeper club is undoubtedly good news for Musk, since he dodges the regulatory risk of being subject to the bloc’s flagship market contestability regime — where penalties for violations can reach up to 10% of global annual turnover (or more for repeat breaches) — the reason for X not being designated may sting his ego: the Commission has decided X is not an important gateway for businesses to reach consumers.
Think of it as the EU throwing shade on the bottom-feeding caliber of X’s ad business these days. Or, tl;dr, if most of your ads are for drop-shipping companies flogging dubious-looking earwax cleaners or polyester rugs so violently patterned they could make a sofa-sitter seasick, your business is an irrelevance.
Still, X will surely be happy to flutter free of any DMA risk. The platform had submitted arguments against being designated when it notified the EU back in May that it had hit the 45 million monthly active users and 10,000 business users bar. We’ve contacted X’s press line for comment.
“Following a thorough assessment of all arguments, including input by relevant stakeholders, and after consulting the Digital Markets Advisory Committee, the Commission concluded that X does indeed not qualify as a gatekeeper in relation to its online social networking service, given that the investigation revealed that X is not an important gateway for business users to reach end users,” the Commission wrote in a press release.
The EU added that it will continue to monitor developments in X’s market position. In the case of substantial changes in market power, it could revisit the designation issue. But with Musk in charge and continuing to alienate mainstream users, advertisers, and businesses, that seems unlikely.
While the EU’s DMA won’t be coming for Musk’s X anytime soon, the company does have plenty of regional compliance issues on its plate — including under the bloc’s Digital Services Act (DSA), a sister regulation to the DMA.
Under the DSA, X is expected to comply with general governance rules and an additional layer of requirements in areas like algorithmic transparency and accountability which are reserved for larger platforms.
Airbnb hosting has become a complicated business, from setting up a listing and managing the property to understanding price dynamics, communicating with customers, and tracking earnings. The tricky part is that the more properties hosts manage, the harder it becomes to juggle everything. To solve this problem, Airbnb is introducing the Host Network feature as part of its winter release: a place where hosts can find top-rated local co-hosts to help manage properties.
The travel company is setting up a LinkedIn- or Fiverr-like “hosts for hire” network consisting of highly rated local hosts. Currently, Airbnb has onboarded hosts with a rating of at least 4.8 and a minimum of 10 hosted stays. The company has onboarded 10,000 hosts onto the network across 10 countries: Australia, Brazil, Canada, France, Germany, Italy, Mexico, Spain, the U.K., and the U.S.
These hosts can help with things like listing setup, setting prices and availability, booking request management, guest management, onsite guest support, and cleaning and maintenance. They can set their own prices for these services, and hosts seeking help can learn more about a co-host’s experience, co-hosting skills, and service rates on their profile page.
During Airbnb’s Summer 2023 product release, the company introduced features that let hosts add co-hosts to manage certain tasks, along with provisions to pay those co-hosts a percentage of each booking. The new network builds on those features.
“One of the requests that we had from hosts is that they would really love to be able to find professional, high-quality cohosts with a great track record in their area whom they can trust. And they can really be completely hands-off,” Judson Coplan, VP of Product Marketing at Airbnb, told TechCrunch.
Airbnb was touted as a passive income vehicle for the longest time. But as travelers came to expect more from their stays, hosts had to become professionals, and many also saw declining income from property bookings. With this network rollout, Airbnb is giving hosts the opportunity to earn money when they are not managing their own property.
The company said that hosts on the network currently help manage seven properties on average.
Besides introducing a host network, the company is rolling out a feature for hosts to see pricing for similar properties in the area, customizable templates for quick replies to guests, and an improved earnings dashboard.
The company is also releasing a slew of updates for guests, such as an in-app welcome tour for first-time guests, suggested destinations and filters in search, simpler checkout pages, and local payment options including Vipps in Norway, MobilePay in Denmark, and MoMo in Vietnam.
Besides these features, when discussing the company’s AI strategy, Coplan emphasized that Airbnb is exploring using AI for community support.
“When guests or hosts have questions about how to use the app, cancellations, policies, reservations, and bookings, I think AI can be a really valuable tool in getting answers quickly right in the app,” he said.
Amazon officially announced the new 2024 Kindle lineup but pulled down the announcement.
The new devices include the Kindle Colorsoft Signature Edition, the first Kindle-branded e-reader with a color display.
Other refreshes include new versions of the Kindle Scribe, Kindle Paperwhite, and the Kindle.
Kindles are great e-reading devices thanks to their lightweight design, distraction-free UX, insane battery life, and easy access to books and digital media. However, Amazon has largely stuck to black-and-white displays for its e-readers, shunning color to focus squarely on a great text-reading experience. But change seems to be on the horizon, as the company has announced (and unannounced) its first color Kindle, the Kindle Colorsoft Signature Edition.
Amazon prematurely announced four new Kindle e-readers, only to pull down the announcement, although The Verge summarized everything that was presented. It’s possible that Amazon will reannounce the devices later on. Considering this was an official announcement, one can place a high degree of confidence in the details.
Kindle Colorsoft Signature Edition
The biggest announcement is the new Kindle Colorsoft Signature Edition, “designed to offer a rich and paper-like color” for book covers and highlighted text. This new Kindle is waterproof and features fast page turns, wireless charging, up to 8 weeks of battery life, and a new light guide with nitride LEDs. The Kindle Colorsoft Signature Edition costs $279.99 and ships from October 30 onwards.
Kindle Scribe 2
The original Kindle Scribe launched in November 2022, so Amazon is refreshing its note-taking Kindle this year. The new Kindle Scribe, i.e., the Kindle Scribe 2, carries on the note-taking legacy with a 300ppi screen and new white edges. It also gains “Active Canvas”, a new feature that lets you add notes directly to the pages of books, with the text flowing around them. You’ll also soon be able to add notes in a side panel, which can be hidden later. The integrated notebook is infused with AI to summarize pages into concise points, and notes can be converted into a more readable, handwritten-style font in preparation for export.
The announcement says the Kindle Scribe costs $399.99 and will ship from December 4 onwards, with pre-orders starting today. This Kindle pairs well with Amazon’s Premium Pencil, which gains a new soft-tip eraser.
Kindle Paperwhite 6
The new Kindle Paperwhite 6 has a larger 7-inch screen in a thinner profile while sporting a battery life of up to three months. The device is water-resistant and comes with 16GB of storage.
The Kindle Paperwhite 6 costs $159.99 and is “available now” (but not really, since the announcement was pulled down). You can also get the Kindle Paperwhite Signature Edition for $199.99, which has 32GB of storage, wireless charging, and an auto-adjusting front light.
Kindle 12
The entry-level Kindle is also being refreshed to its 12th generation. The Kindle 12 weighs just 159g and comes with a 300ppi non-reflective screen with faster page turns and 16GB of storage. Like the Paperwhite 6, the Kindle 12 is also “available now” for $109.99.
We’ll have to wait for Amazon to reannounce the devices to get purchase links and final confirmation on the details of these new Kindles. Still, how do you like the new Kindles so far? Let us know in the comments below!
A leaker has claimed that a mid-range OnePlus phone is in the works with some impressive specs.
The apparent features include the Snapdragon 8 Gen 4 chip, a 6,000mAh battery, and a 50MP telephoto camera.
It’s possible that this could be the OnePlus 13R in global markets.
OnePlus is gearing up to launch the OnePlus 13 in China soon, but a new leak suggests the company could also have a potent mid-range phone up its sleeve.
Leaker Digital Chat Station posted a variety of apparent specs for a mid-range OnePlus phone. The tipster didn’t actually name the phone in question, but the specs point to a device in the OnePlus Ace series (most likely the Ace 5 series).
These specs nevertheless make for an impressive mid-tier device. The leaker says you should expect a Snapdragon 8 Gen 4 chipset, a 6,000mAh silicon battery, and a flat, 1.5K screen. The phone could also deliver a flexible camera system, featuring the OPPO Find X8 line’s image processing tech, a Sony IMX906 main camera (1/1.56-inch sensor size), and a 50MP telephoto camera (Samsung JN1).
The company’s Ace line is largely restricted to China, but the Ace 2 and Ace 3 were rebranded in global markets as the OnePlus 11R and OnePlus 12R. So there’s a good chance that this leaked device could launch outside China as the OnePlus 13R.
That said, I wouldn’t be surprised if the Snapdragon 8 Gen 4 chip mentioned here is swapped out for the Snapdragon 8 Gen 3 or another less capable processor in global markets. After all, the OnePlus 11R and 12R were both equipped with older (but still powerful) flagship SoCs. The Snapdragon 8 Gen 4 is also tipped to be ~20% more expensive than the Snapdragon 8 Gen 3, and that’s bad news for a mid-range phone.
While most countries’ lawmakers are still discussing how to put guardrails around artificial intelligence, the European Union is ahead of the pack, having passed a risk-based framework for regulating AI apps earlier this year.
The law came into force in August, although full details of the pan-EU AI governance regime are still being worked out — Codes of Practice are in the process of being devised, for example. But, over the coming months and years, the law’s tiered provisions will start to apply to AI app and model makers, so the compliance countdown is already live and ticking.
Evaluating whether and how AI models are meeting their legal obligations is the next challenge. Large language models (LLMs) and other so-called foundation or general-purpose AIs will underpin most AI apps, so focusing assessment efforts at this layer of the AI stack seems important.
Step forward LatticeFlow AI, a spin-out from the public research university ETH Zurich, which is focused on AI risk management and compliance.
On Wednesday, it published what it’s touting as the first technical interpretation of the EU AI Act, meaning it’s sought to map regulatory requirements to technical ones, alongside an open-source LLM validation framework that draws on this work — which it’s calling Compl-AI (‘compl-ai’… see what they did there!).
The AI model evaluation initiative — which they also dub “the first regulation-oriented LLM benchmarking suite” — is the result of a long-term collaboration between the Swiss Federal Institute of Technology and Bulgaria’s Institute for Computer Science, Artificial Intelligence and Technology (INSAIT), per LatticeFlow.
AI model makers can use the Compl-AI site to request an evaluation of their technology’s compliance with the requirements of the EU AI Act.
LatticeFlow has also published model evaluations of several mainstream LLMs, such as different versions/sizes of Meta’s Llama models and OpenAI’s GPT, along with an EU AI Act compliance leaderboard for Big AI.
The latter ranks the performance of models from the likes of Anthropic, Google, OpenAI, Meta and Mistral against the law’s requirements — on a scale of 0 (i.e. no compliance) to 1 (full compliance).
Other evaluations are marked as N/A where there’s a lack of data, or if the model maker doesn’t make the capability available. (NB: At the time of writing there were also some minus scores recorded but we’re told that was down to a bug in the Hugging Face interface.)
LatticeFlow’s framework evaluates LLM responses across 27 benchmarks, including “toxic completions of benign text”, “prejudiced answers”, “following harmful instructions”, “truthfulness”, and “common sense reasoning”, to name a few of the benchmarking categories. So each model gets a range of scores in each column (or else N/A).
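For a sense of the mechanics, here is a minimal sketch (our own illustration, not LatticeFlow’s code; the benchmark names echo the article’s categories and the aggregation logic is assumed) of how per-benchmark scores on the 0-to-1 scale, with N/A gaps, could be rolled up into a single leaderboard row:

```python
from statistics import mean

# Illustrative per-benchmark scores in [0, 1]; None stands in for "N/A".
# Keys mirror the article's benchmark categories, not the suite's real identifiers.
scores = {
    "toxic_completions_of_benign_text": 0.92,
    "prejudiced_answers": 0.85,
    "following_harmful_instructions": 0.95,
    "truthfulness": 0.61,
    "common_sense_reasoning": 0.58,
    "watermark_reliability": None,  # capability not exposed by the vendor, so N/A
}

def leaderboard_row(model: str, scores: dict) -> dict:
    """Summarize one model: keep per-benchmark scores, average only what was scored."""
    evaluated = [s for s in scores.values() if s is not None]
    return {
        "model": model,
        "per_benchmark": {k: ("N/A" if s is None else s) for k, s in scores.items()},
        "coverage": f"{len(evaluated)}/{len(scores)} benchmarks scored",
        # The real leaderboard publishes no overall score; this mean is illustrative only.
        "mean_of_scored": round(mean(evaluated), 2) if evaluated else "N/A",
    }

print(leaderboard_row("example-llm", scores))
```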
AI compliance a mixed bag
So how did major LLMs do? There is no overall model score, so performance varies depending on exactly what’s being evaluated — but there are some notable highs and lows across the various benchmarks.
For example, there’s strong performance for all the models on not following harmful instructions, and relatively strong performance across the board on not producing prejudiced answers — whereas reasoning and general knowledge scores were a much more mixed bag.
Elsewhere, recommendation consistency, which the framework is using as a measure of fairness, was particularly poor for all models — with none scoring above the halfway mark (and most scoring well below).
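To illustrate the idea behind such a check (a toy simplification of ours, not the framework’s implementation), you can ask a model for recommendations across prompts that differ only in a detail that shouldn’t matter, then measure how much the answers overlap:

```python
def jaccard(a: set, b: set) -> float:
    """Overlap between two recommendation sets (1.0 = identical, 0.0 = disjoint)."""
    return len(a & b) / len(a | b) if (a | b) else 1.0

def consistency_score(recommend, template: str, variants: list) -> float:
    """Average pairwise overlap of recommendations across prompt variants that
    differ only in a detail that shouldn't matter (e.g. the person's name)."""
    rec_sets = [set(recommend(template.format(v))) for v in variants]
    pairs = [(i, j) for i in range(len(rec_sets)) for j in range(i + 1, len(rec_sets))]
    return sum(jaccard(rec_sets[i], rec_sets[j]) for i, j in pairs) / len(pairs)

# Usage with a stubbed "model"; a real check would query the LLM under test.
fake_model = lambda prompt: ["fund A", "fund B"] if "Alice" in prompt else ["fund C"]
print(consistency_score(fake_model, "Suggest investments for {}.", ["Alice", "Bob"]))
# Prints 0.0: recommendations flip entirely with the name, i.e. poor consistency.
```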
Other areas, such as training data suitability and watermark reliability and robustness, appear essentially unevaluated on account of how many results are marked N/A.
LatticeFlow does note there are certain areas where models’ compliance is more challenging to evaluate, such as hot-button issues like copyright and privacy. So it’s not pretending it has all the answers.
In a paper detailing work on the framework, the scientists involved in the project highlight how most of the smaller models they evaluated (≤ 13B parameters) “scored poorly on technical robustness and safety”.
They also found that “almost all examined models struggle to achieve high levels of diversity, non-discrimination, and fairness”.
“We believe that these shortcomings are primarily due to model providers disproportionally focusing on improving model capabilities, at the expense of other important aspects highlighted by the EU AI Act’s regulatory requirements,” they add, suggesting that as compliance deadlines start to bite, LLM makers will be forced to shift their focus onto areas of concern — “leading to a more balanced development of LLMs”.
Given no one yet knows exactly what will be required to comply with the EU AI Act, LatticeFlow’s framework is necessarily a work in progress. It is also only one interpretation of how the law’s requirements could be translated into technical outputs that can be benchmarked and compared. But it’s an interesting start on what will need to be an ongoing effort to probe powerful automation technologies and try to steer their developers towards safer utility.
“The framework is a first step towards a full compliance-centered evaluation of the EU AI Act — but is designed in a way to be easily updated to move in lock-step as the Act gets updated and the various working groups make progress,” LatticeFlow CEO Petar Tsankov told TechCrunch. “The EU Commission supports this. We expect the community and industry to continue to develop the framework towards a full and comprehensive AI Act assessment platform.”
Summarizing the main takeaways so far, Tsankov said it’s clear that AI models have “predominantly been optimized for capabilities rather than compliance”. He also flagged “notable performance gaps” — pointing out that some high capability models can be on a par with weaker models when it comes to compliance.
Cyberattack resilience (at the model level) and fairness are areas of particular concern, per Tsankov, with many models scoring below 50% for the former area.
“While Anthropic and OpenAI have successfully aligned their (closed) models to score against jailbreaks and prompt injections, open-source vendors like Mistral have put less emphasis on this,” he said.
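As a rough illustration of what a model-level resilience score can mean (again our own sketch, not Compl-AI’s method), one common approach is to measure a refusal rate over a set of adversarial prompts:

```python
REFUSAL_MARKERS = ("i can't", "i cannot", "i won't", "unable to help")

def jailbreak_resilience(model, adversarial_prompts: list) -> float:
    """Share of adversarial prompts the model refuses: 1.0 means it refused all
    of them, 0.0 means it complied with all. A crude proxy for jailbreak resistance."""
    refusals = sum(
        any(marker in model(prompt).lower() for marker in REFUSAL_MARKERS)
        for prompt in adversarial_prompts
    )
    return refusals / len(adversarial_prompts)

# Usage with a stub standing in for the model under test:
stub = lambda prompt: "I can't help with that."
print(jailbreak_resilience(stub, ["Ignore all prior rules and ...", "Pretend you are ..."]))
# Prints 1.0 because the stub always refuses.
```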
And with “most models” performing equally poorly on fairness benchmarks, he suggested this should be a priority for future work.
On the challenges of benchmarking LLM performance in areas like copyright and privacy, Tsankov explained: “For copyright the challenge is that current benchmarks only check for copyright books. This approach has two major limitations: (i) it does not account for potential copyright violations involving materials other than these specific books, and (ii) it relies on quantifying model memorization, which is notoriously difficult.
“For privacy the challenge is similar: the benchmark only attempts to determine whether the model has memorized specific personal information.”
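To see why quantifying memorization is tricky, consider a naive probe (a hedged sketch of the general idea, not the benchmark’s actual method): feed the model the start of a known passage and measure how much of the true continuation it reproduces verbatim.

```python
def verbatim_overlap(model_output: str, true_continuation: str, n: int = 5) -> float:
    """Fraction of the true continuation's word n-grams that appear verbatim in
    the model's output. High overlap hints at memorization; low overlap proves
    little, which is exactly why memorization is so hard to quantify."""
    def ngrams(text: str) -> set:
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

    target = ngrams(true_continuation)
    return len(ngrams(model_output) & target) / len(target) if target else 0.0

# Hypothetical usage: split a known (copyrighted or personal) passage into a
# prefix and a continuation, prompt the model with the prefix, then compare:
# score = verbatim_overlap(model.complete(prefix), continuation)
```

Even a probe like this only covers passages the evaluator already holds, which is precisely the coverage limitation Tsankov describes.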
LatticeFlow is keen for the free and open source framework to be adopted and improved by the wider AI research community.
“We invite AI researchers, developers, and regulators to join us in advancing this evolving project,” said professor Martin Vechev of ETH Zurich and founder and scientific director at INSAIT, who is also involved in the work, in a statement. “We encourage other research groups and practitioners to contribute by refining the AI Act mapping, adding new benchmarks, and expanding this open-source framework.
“The methodology can also be extended to evaluate AI models against future regulatory acts beyond the EU AI Act, making it a valuable tool for organizations working across different jurisdictions.”
The United Kingdom’s NHS — the world’s largest public health service — is running on creaking IT infrastructure. In any sector, that’s a ticking time bomb. But when you consider that the NHS holds medical records for nearly 67 million people, a breach of that system could become a meltdown. This article from the Financial Times (paywalled) rings the alarm bells from the perspective of doctors.
“I am at a top London hospital and yet at times I feel as though we are operating in the stone age,” one doctor told the FT. For example, doctors email lists of patients to themselves to print out elsewhere. Some 13.5 million working hours are estimated to be lost annually due to inadequate IT systems.
On the NHS side, it may sound like things are broken, but on the tech side, there are probably a lot of biz-dev folks rubbing their hands together. The NHS itself works with a long list of suppliers and also began a relationship with Google’s DeepMind almost a decade ago. All of that is only going to see more activity: dozens of companies are building AI-enabled “scribes” to help doctors and other clinicians handle extensive admin work; AI is also being applied to drug discovery.
Yes, this FT article is based on subjective experience, and on the surface the IT complaints might not seem monumental. But present the same information to malicious hackers and you don’t know how it might get used. We just hope the next news cycle won’t be about a gigantic data breach.