Report: Billy Dee Williams to Return for STAR WARS: EPISODE IX


Billy Dee Williams was one of the few main cast members from the original Star Wars trilogy who didn't have a part in The Force Awakens. And while Rian Johnson considered bringing Williams back for Star Wars: The Last Jedi, he was ultimately left out of that film as well. Now, Williams may finally be reprising his role as Lando Calrissian for Star Wars: Episode IX.

According to The Hollywood Reporter, Williams' Star Wars homecoming has been confirmed, though Lucasfilm has yet to make an official comment on the report. Rumors about Williams' return were sparked when the 81-year-old actor revealed that he was on a new fitness regimen. Williams also canceled an upcoming appearance at a comic convention, citing a conflict with his film schedule.

Williams originated the role of Lando in The Empire Strikes Back, before taking an even bigger role in Return of the Jedi. Since then, Williams has occasionally reprised his iconic part, most recently by voicing the character in Star Wars Rebels. Donald Glover portrayed a younger version of Lando in Solo: A Star Wars Story earlier this year.

In the aftermath of The Last Jedi, Lando's return makes sense within the larger narrative. It's unclear whether Lando was involved with General Leia's Resistance, but the ultimate fates of Han Solo, Luke Skywalker, and likely Leia herself will probably give Lando the motivation he needs to once again accept a leadership role. That's just speculation for now, but we're eager to see how Lando's new story plays out.

Star Wars: Episode IX will begin filming this summer and will be released on December 20, 2019.

Are you excited about Lando's big comeback? Let us know in the comment section below!

Images: Lucasfilm


UK report warns DeepMind Health could gain ‘excessive monopoly power’


DeepMind's foray into digital health services continues to raise concerns. The latest worries are voiced by a panel of external reviewers appointed by the Google-owned AI company to report on its operations after its initial data-sharing arrangements with the U.K.'s National Health Service (NHS) ran into a major public controversy in 2016.

The DeepMind Health Independent Reviewers' 2018 report flags a series of risks and concerns, as they see it, including the potential for DeepMind Health to be able to "exert excessive monopoly power" as a result of the data access and streaming infrastructure that's bundled with provision of the Streams app, and which, contractually, positions DeepMind as the access-controlling intermediary between the structured health data and any other third parties that might, in the future, want to offer their own digital assistance solutions to the Trust.

While the underlying FHIR (aka fast healthcare interoperability resource) standard deployed by DeepMind for Streams uses an open API, the contract between the company and the Royal Free Trust funnels connections through DeepMind's own servers and prohibits connections to other FHIR servers. It's a commercial structure that seemingly works against the openness and interoperability DeepMind's co-founder Mustafa Suleyman has claimed to support.
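To make concrete what an open FHIR API looks like from a third party's point of view, here is a minimal sketch (in Python, using the requests library) of reading patient data from a standards-compliant FHIR server over its REST interface. The base URL is a placeholder rather than a real endpoint, and this is not DeepMind's Streams code; the point is simply that the standard itself does not require any single gatekeeper.

```python
import requests

# Placeholder base URL; any standards-compliant FHIR server exposes the same REST interface.
FHIR_BASE = "https://fhir.example-trust.nhs.uk"


def get_patient(patient_id: str) -> dict:
    """Read a single Patient resource via the standard FHIR REST API."""
    resp = requests.get(
        f"{FHIR_BASE}/Patient/{patient_id}",
        headers={"Accept": "application/fhir+json"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()


def search_creatinine_results(patient_id: str) -> dict:
    """Search Observation resources for a patient's serum creatinine results,
    the kind of data an acute kidney injury alerting app depends on
    (LOINC 2160-0 identifies creatinine in serum or plasma)."""
    resp = requests.get(
        f"{FHIR_BASE}/Observation",
        params={"patient": patient_id, "code": "http://loinc.org|2160-0"},
        headers={"Accept": "application/fhir+json"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()
```

The contractual restriction described above sits one layer up from this: the requests themselves are standard, but the Trust is barred from pointing them at any FHIR server other than DeepMind's.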

"There are many examples in the IT arena where companies lock their customers into systems that are difficult to change or replace. Such arrangements are not in the interests of the public. And we do not want to see DeepMind Health putting itself in a position where clients, such as hospitals, find themselves forced to stay with DeepMind Health even if it is no longer financially or clinically sensible to do so; we want DeepMind Health to compete on quality and value, not by entrenching legacy position," the reviewers write.

Although they level to DeepMind’s “said dedication to interoperability of techniques,” and “their adoption of the FHIR open API” as optimistic indications, writing: “Which means that there’s potential for a lot of different SMEs to change into concerned, creating a various and revolutionary market which works to the good thing about customers, innovation and the economic system.”

"We also note DeepMind Health's intention to implement many of the features of Streams as modules which could be easily swapped, meaning that they will have to rely on being the best to stay in business," they add.

However, stated intentions and future potentials are clearly not the same as on-the-ground reality. And, as it stands, a technically interoperable app-delivery infrastructure is being encumbered by prohibitive clauses in a commercial contract, and by a lack of regulatory pushback against such conduct.

The reviewers also raise concerns about an ongoing lack of clarity around DeepMind Health's business model, writing: "Given the current environment, and with no clarity about DeepMind Health's business model, people are likely to suspect that there must be an undisclosed profit motive or a hidden agenda. We do not believe this to be the case, but would urge DeepMind Health to be transparent about their business model, and their ability to stick to that without being overridden by Alphabet. For once an idea of hidden agendas is fixed in people's minds, it is hard to shift, no matter how much a company is motivated by the public good."

"We have had detailed conversations about DeepMind Health's evolving thinking in this area, and are aware that some of these questions have not yet been finalised. However, we would urge DeepMind Health to set out publicly what they are proposing," they add.

DeepMind has suggested it wants to build healthcare AIs that are capable of charging by results. But Streams does not involve any AI. The service is also being offered to NHS Trusts for free, at least for the first five years, raising the question of how exactly the Google-owned company intends to recoup its investment.

Google of course monetizes a large suite of free-at-the-point-of-use consumer products (such as the Android mobile operating system, its cloud email service Gmail, and the YouTube video-sharing platform, to name three) by harvesting people's personal data and using that information to inform its ad-targeting platforms.

Hence the reviewers' recommendation that DeepMind set out its thinking on its business model, to avoid its intentions vis-à-vis people's medical data being viewed with suspicion.

The company's historical modus operandi also underlines the potential monopoly risks if DeepMind is allowed to carve out a dominant platform position in digital healthcare provision, given how successfully its parent has been able to turn a free-for-OEMs mobile OS (Android) into global smartphone market OS dominance, for example.

So, while DeepMind only has a handful of contracts with NHS Trusts for the Streams app and delivery infrastructure at this stage, the reviewers' concerns over the risk of the company gaining "excessive monopoly power" do not seem overblown.

They are also worried about DeepMind's ongoing vagueness about how exactly it works with its parent Alphabet, and what data could ever be transferred to the ad giant; an inevitably queasy combination when stacked against DeepMind's handling of people's medical data.

"To what extent can DeepMind Health insulate itself against Alphabet instructing them in the future to do something which it has promised not to do today? Or, if DeepMind Health's current management were to leave DeepMind Health, how much could a new CEO alter what has been agreed today?" they write.

"We appreciate that DeepMind Health would continue to be bound by the legal and regulatory framework, but much of our attention is on the steps that DeepMind Health have taken to take a more ethical stance than the law requires; could this all be ended? We encourage DeepMind Health to look at ways of entrenching its separation from Alphabet and DeepMind more robustly, so that it can have enduring force to the commitments it makes."

Responding to the report's publication on its website, DeepMind writes that it is "developing our longer-term business model and roadmap."

"Rather than charging for the early stages of our work, our first priority has been to prove that our technologies can help improve patient care and reduce costs. We believe that our business model should flow from the positive impact we create, and will continue to explore outcomes-based elements so that costs are at least partly related to the benefits we deliver," it continues.

So it has nothing to say to defuse the reviewers' concerns about making its intentions for monetizing health data plain, beyond deploying a few choice PR soundbites.

On its links with Alphabet, DeepMind also has little to say, writing only that: "We will explore further ways to ensure there is clarity about the binding legal frameworks that govern all our NHS partnerships."

"Trusts remain in full control of the data at all times," it adds. "We are legally and contractually bound to only using patient data under the instructions of our partners. We will continue to make our legal agreements with Trusts publicly available to allow scrutiny of this important point."

"There is nothing in our legal agreements with our partners that prevents them from working with any other data processor, should they wish to seek the services of another provider," it also claims in response to additional questions we put to it.

"We hope that Streams can help unlock the next wave of innovation in the NHS. The infrastructure that powers Streams is built on state-of-the-art open and interoperable standards, known as FHIR. The FHIR standard is supported in the UK by NHS Digital, NHS England and the INTEROPen group. This could allow our partner trusts to work more easily with other developers, helping them bring many more new innovations to the clinical frontlines," it adds in additional comments to us.

"Under our contractual agreements with relevant partner trusts, we have committed to building FHIR API infrastructure within the five-year terms of the agreements."

Asked about the progress it has made on a technical audit infrastructure for verifying access to health data, which it announced last year, it reiterated the wording on its blog, saying: "We will remain vigilant about setting the highest possible standards of information governance. At the beginning of this year, we appointed a full-time Information Governance Manager to oversee our use of data in all areas of our work. We are also continuing to build our Verifiable Data Audit and other tools to clearly show how we're using data."
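DeepMind has described Verifiable Data Audit as an append-only, cryptographically verifiable log of how patient data is used. As a rough illustration of the general idea only (DeepMind's actual design is based on Merkle-tree structures and is not public code), here is a minimal hash-chained audit log in Python: each entry embeds the hash of the previous entry, so any retroactive edit or deletion breaks the chain and can be detected by a verifier.

```python
import hashlib
import json
from datetime import datetime, timezone


class AuditLog:
    """Toy append-only log: each entry commits to the previous entry's hash."""

    def __init__(self):
        self.entries = []

    def record_access(self, actor: str, resource: str, purpose: str) -> str:
        """Append one data-access event and return its hash."""
        prev_hash = self.entries[-1]["entry_hash"] if self.entries else "0" * 64
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "actor": actor,
            "resource": resource,
            "purpose": purpose,
            "prev_hash": prev_hash,
        }
        entry["entry_hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(entry)
        return entry["entry_hash"]

    def verify(self) -> bool:
        """Recompute every hash; return False if any entry was altered."""
        prev_hash = "0" * 64
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "entry_hash"}
            if body["prev_hash"] != prev_hash:
                return False
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if entry["entry_hash"] != expected:
                return False
            prev_hash = entry["entry_hash"]
        return True


# Hypothetical usage: log one access by a clinician, then check chain integrity.
log = AuditLog()
log.record_access("clinician_123", "patient/456/creatinine", "AKI alert review")
assert log.verify()
```

A hospital or regulator holding only the most recent hash could detect after the fact whether earlier entries had been rewritten, which is the property DeepMind says its audit tooling is meant to provide.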

So developments on that front look as slow as we expected.

The Google-owned U.K. AI company began its push into digital healthcare services in 2015, quietly signing an information-sharing arrangement with a London-based NHS Trust that gave it access to around 1.6 million people's medical records for developing an alerts app for a condition called Acute Kidney Injury.

It also inked an MoU with the Trust where the pair set out their ambition to apply AI to NHS data sets. (They even went as far as to get ethical sign-off for an AI project, but have consistently claimed the Royal Free data was not fed to any AIs.)

However, the data-sharing collaboration ran into trouble in May 2016 when the scope of patient data being shared by the Royal Free with DeepMind was revealed (via investigative journalism, rather than by disclosures from the Trust or DeepMind).

None of the ~1.6 million people whose non-anonymized medical records were passed to the Google-owned company were informed or asked for their consent. And questions were raised about the legal basis for the data-sharing arrangement.

Last summer the U.K.'s privacy regulator concluded an investigation of the project, finding that the Royal Free NHS Trust had broken data protection rules during the app's development.

Yet despite ethical questions and regulatory disquiet about the legality of the data sharing, the Streams project steamrollered on. And the Royal Free Trust went on to implement the app for use by clinicians in its hospitals, while DeepMind has also signed several additional contracts to deploy Streams to other NHS Trusts.

More recently, the law firm Linklaters completed an audit of the Royal Free Streams project, after being commissioned by the Trust as part of its agreement with the ICO. Though this audit only examined the current functioning of Streams. (There was no historical audit of the lawfulness of people's medical records being shared during the build and test phase of the project.)

Linklaters did recommend the Royal Free terminate its wider MoU with DeepMind, and the Trust has confirmed to us that it will be following the firm's advice.

"The audit recommends we terminate the historic memorandum of understanding with DeepMind which was signed in January 2016. The MOU is no longer relevant to the partnership and we are in the process of terminating it," a Royal Free spokesperson told us.

So DeepMind, probably the world's most famous AI company, is in the curious position of being involved in providing digital healthcare services to U.K. hospitals that don't actually involve any AI at all. (Though it does have some ongoing AI research projects with NHS Trusts too.)

In mid 2016, at the height of the Royal Free DeepMind data scandal, and in a bid to foster greater public trust, the company appointed the panel of external reviewers who have now produced their second report looking at how the division is operating.

And it's fair to say that much has happened in the tech industry since the panel was appointed to further undermine public trust in tech platforms and algorithmic promises, including the ICO's finding that the initial data-sharing arrangement between the Royal Free and DeepMind broke U.K. privacy laws.

The eight members of the panel for the 2018 report are: Martin Bromiley OBE; Elisabeth Buggins CBE; Eileen Burbidge MBE; Richard Horton; Dr. Julian Huppert; Professor Donal O’Donoghue; Matthew Taylor; and Professor Sir John Tooke.

In their latest report the external reviewers warn that the public's view of tech giants has "shifted considerably" versus where it was even a year ago, asserting that "issues of privacy in a digital age are, if anything, of greater concern."

At the same time politicians are also looking much more critically at the workings and social impacts of tech giants.

Though the U.K. government has also been keen to position itself as a supporter of AI, providing public funds for the sector and, in its Industrial Strategy white paper, identifying AI and data as one of four so-called "Grand Challenges" where it believes the U.K. can "lead the world for years to come", including specifically name-checking DeepMind as one of a handful of innovative homegrown AI companies for the country to be proud of.

Still, questions over how to manage and regulate public sector data and AI deployments, especially in highly sensitive areas such as healthcare, remain to be clearly addressed by the government.

Meanwhile, the encroaching ingress of digital technologies into the healthcare space (even when the technologies don't involve any AI) is already presenting major challenges by putting pressure on existing information governance rules and structures, and raising the specter of monopolistic risk.

Asked whether it offers any guidance to NHS Trusts around digital assistance for clinicians, including specifically whether it requires multiple options be offered by different providers, the NHS' digital services provider, NHS Digital, referred our question on to the Department of Health (DoH), saying it's a matter of health policy.

The DoH in turn referred the question to NHS England, the executive non-departmental body which commissions contracts and sets priorities and directions for the health service in England.

And at the time of writing, we're still waiting for a response from the steering body.

Ultimately it looks like it will be up to the health service to put in place a clear and robust structure for AI and digital decision services that fosters competition by design, by baking in a requirement for Trusts to support multiple independent options when procuring apps and services.

Without that vital check and balance, the risk is that platform dynamics will quickly dominate and control the emergent digital health assistance space, just as big tech has dominated consumer tech.

But publicly funded healthcare decisions and data sets should not simply be handed to the sole market-dominating entity that is willing and able to burn the most resource to own the space.

Nor should government stand by and do nothing when there's a clear risk that a vital area of digital innovation is susceptible to being closed down by a tech giant muscling in and positioning itself as a gatekeeper before others have had a chance to show what their ideas are made of, and before even a market has had the chance to form.




Report: Uber Couldn't Even Leave Southeast Asia Without Pissing Everyone the Heck Off



In March, Uber abandoned its operations in eight countries in Southeast Asia with an announcement that it was selling everything it had in the region to competitor Grab. Per a Monday report in the New York Times, it now looks like this exit was anything but graceful, and has infuriated both regulators and drivers in…

Read more…




Trump’s phones remain vulnerable because he considers security 'inconvenient,' report says




President Donald Trump is once again making headlines for the staggering carelessness of his smartphone use, leaving himself vulnerable to any number of hacks and security risks.

This latest update comes from Politico, which details how Trump is using at least two phones, neither of which has the kind of security features you'd expect a president to have. What's more, the president reportedly refuses to allow his staff to strengthen the security of his phones.

We already knew Trump had an iPhone that was pretty much limited to Twitter and a handful of apps. (We'll call it the Tweet Phone.) Politico reports that he has also been issued a phone that can only make calls. (We'll call this one the Hannity Phone.) Read more…





Equifax Operates Another Credit Bureau And You Can't Freeze Your Report Online



Remember all that trouble you went through to freeze your credit report after the massive and unforgivable Equifax hack? Turns out it was all for nothing, as security writer Brian Krebs reported Wednesday that the same company responsible for compromising the security of nearly two-thirds of the adult population of…

Read more…


