Feeding the world by AI, machine learning and the cloud

Thomas Jung: In the public sector, for example, just to call out a couple: we’ve been working with the Open Data Institute to publish some of our data in a reusable format, raw data essentially, that scientists across the world can use, because we want to engage in that joint R&D practice. So there is data that we simply share with the community, but we also care about data standards. We’re a board member of AgGateway, a consortium of, I think, 200 or more food-sector companies working on how to actually drive digital agriculture. So we’re making sure that the standards work for everyone, that we don’t end up with proprietary ideas from each member of the food chain, and that we can connect our data across it.

The private sector is just as important. We’re lucky enough to be headquartered in Basel, which is really a cluster of science, and of chemical sciences in particular. A lot of pharma companies are around here, so we can exchange a lot of what we learn between pharma and agriculture: we can learn about chemistry, about practices, about how we work through our labs. We’re intensely in touch with our colleagues around the region here, but of course also elsewhere, and it’s quite a natural cluster.

Maybe last but not least, one of the really exciting perspectives for me, which I only realized a couple of years ago, is how much there is to learn if you look across industries. Recently I hired a digital expert from Formula 1, and why is that? If you look at it technically, steering, controlling, and understanding a Formula 1 race car remotely isn’t much different from steering a tractor. The vehicles are super different, but the technology has a lot of similarities. So understanding IoT in that case, and understanding data transfer from the field to control centers, it doesn’t matter what industry we’re working in; we can learn all across.

We’re also working with a super experienced partner in the image recognition space to better understand what happens in the field, where we as Syngenta can bring agronomic knowledge and that partner can bring the technical knowledge of how to make the most of the images. They come from a very different field, nothing to do with agriculture, but the skills are still super transferable. So I’m really looking for talent across industries, and literally anybody who’s up for our cause, not limited to people with life science experience.

Laurel Ruma: That’s really interesting, thinking about how much data F1 processes on a single race day, or just in general the amount of inputs coming from so many different places. I can see how that would be very similar: you’re dealing with huge databases and trying to build better algorithms to get to better conclusions. As you look around the larger community, Syngenta is clearly part of an ecosystem, so how do outside factors like regulation and societal pressures help Syngenta build better products and be part of, not outside of, that unavoidable agricultural revolution?

Thomas Jung: It’s a great point, because regulation in general is, of course, a practical burden to some, or may be perceived as one. But for us in digital science, it’s a very welcome driver of innovation. One of the key examples we have at the moment is our work with the Environmental Protection Agency in the US, the EPA, which has stepped forward to stop supporting chemical studies on mammals by the year 2035. So what does that mean? It sounds like a big threat, but really it’s a catalyst for digital science, so we very much welcome this request. We’re now working on ways to use data-based science to prove the safety of the products we invent. There are a couple of major universities across the US that have received funding from the EPA to help find those ways of doing our science, so we’re also engaging to make sure we do this in the best possible way together, so that we can really land at a data-driven science here and stop doing all these real-life tests.

So it’s a fantastic opportunity, but of course there’s a long way to go. I think 2035 is somewhat realistic; we’re not close yet. What we can do today is, for example, model a cell. Organ-on-a-chip is a big trend, so we can model up to a whole organ, but there’s no way we can model a system or even an ecosystem at this point. So there’s a lot of space for us to explore, and I’m really happy that regulators are a partner in this, and actually even a driver. That’s superbly helpful. The other dimension you mentioned, societal pressure, is also there. I think it’s important that society keeps pushing for causes like regenerative agriculture, because that is what creates the grounds for us to help. If there is no demand, it’s hard for Syngenta to push it forward alone.

So I think the demand is important, and so is the awareness that we need to treat our planet in the best possible way. We’re also working with, for example, The Nature Conservancy, using their conservation expertise to bring up sustainable agricultural practices in South America, where we have some projects to restore rainforests, restore biodiversity, and see what we can do together. So again, a bit like what we discussed before, we can only be better by collaborating across industries, and that includes NGOs as much as regulators and society as a whole.

The Download: generative AI for video, and detecting AI text

The original startup behind Stable Diffusion has launched a generative AI for video

What’s happened: Runway, the generative AI startup that co-created last year’s breakout text-to-image model Stable Diffusion, has released an AI model that can transform existing videos into new ones by applying styles from a text prompt or reference image.

What it does: In a demo reel posted on its website, Runway shows how the model, called Gen-1, can turn people on a street into claymation puppets, and books stacked on a table into a cityscape at night. Other recent text-to-video models can generate very short video clips from scratch, but because Gen-1 adapts existing footage it can produce much longer videos.

Why it matters: Last year’s explosion in generative AI was fueled by the millions of people who got their hands on powerful creative tools for the first time and shared what they made, and Runway hopes Gen-1 will have a similar effect on generated videos. Read the full story.

—Will Douglas Heaven

Why detecting AI-generated text is so difficult (and what to do about it)

Last week, OpenAI unveiled a tool that can detect text produced by its AI system ChatGPT. But if you’re a teacher who fears the coming deluge of ChatGPT-generated essays, don’t get too excited.

Why detecting AI-generated text is so difficult (and what to do about it)

This tool is OpenAI’s response to the heat it’s gotten from educators, journalists, and others for launching ChatGPT without any ways to detect text it has generated. However, it is still very much a work in progress, and it is woefully unreliable. OpenAI says its AI text detector correctly identifies 26% of AI-written text as “likely AI-written.” 

While OpenAI clearly has a lot more work to do to refine its tool, there’s a limit to just how good it can make it. We’re extremely unlikely to ever get a tool that can spot AI-generated text with 100% certainty. It’s really hard to detect AI-generated text because the whole point of AI language models is to generate fluent and human-seeming text, and the model is mimicking text created by humans, says Muhammad Abdul-Mageed, a professor who oversees research in natural-language processing and machine learning at the University of British Columbia.

We are in an arms race to build detection methods that can match the latest, most powerful models, Abdul-Mageed adds. New AI language models are more powerful and better at generating even more fluent language, which quickly makes our existing detection tool kit outdated. 

OpenAI built its detector by creating a whole new AI language model akin to ChatGPT that is specifically trained to detect outputs from models like itself. Although details are sparse, the company apparently trained the model with examples of AI-generated text and examples of human-generated text, and then asked it to spot the AI-generated text. We asked for more information, but OpenAI did not respond. 
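OpenAI hasn’t shared the specifics, but the general recipe described above, supervised training on labeled examples of human-written and AI-generated text, can be illustrated with a deliberately simple sketch. This is only a rough illustration: OpenAI’s real classifier is a fine-tuned language model rather than a bag-of-words pipeline, and the sample texts, labels, and model choice below are invented placeholders.

```python
# Hypothetical illustration of the general idea behind a learned AI-text
# detector: fit a binary classifier on labeled examples of human-written (0)
# and AI-generated (1) text, then score new passages. This is NOT OpenAI's
# actual method; their detector is a fine-tuned language model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled data (placeholders); a real detector needs a huge corpus.
texts = [
    "The rapid advancement of technology has transformed modern society.",  # AI-like
    "honestly i just winged the essay the night before lol",                # human-like
    "In conclusion, there are many factors to consider in this matter.",    # AI-like
    "My grandmother's kitchen always smelled like burnt toast and coffee.", # human-like
]
labels = [1, 0, 1, 0]

# TF-IDF word/bigram features feeding a logistic-regression classifier.
detector = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
detector.fit(texts, labels)

# Probability that a new passage is AI-generated (class 1).
new_text = "This essay explores the multifaceted nature of technological progress."
prob_ai = detector.predict_proba([new_text])[0][1]
print(f"Estimated probability of AI authorship: {prob_ai:.2f}")
```

Even with far more data, a classifier like this only outputs a probability, which is consistent with the hedged detection rate OpenAI reports above.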

Last month, I wrote about another method for detecting text generated by an AI: watermarks. These act as a sort of secret signal in AI-produced text that allows computer programs to detect it as such. 

Researchers at the University of Maryland have developed a neat way of applying watermarks to text generated by AI language models, and they have made it freely available. These watermarks would allow us to tell with almost complete certainty when AI-generated text has been used. 
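The story doesn’t walk through the mechanism, but the Maryland scheme works roughly like this: as the model generates each token, the previous token seeds a pseudo-random split of the vocabulary into a “green” and a “red” list, the generator is nudged toward green tokens, and a detector later counts how often green tokens appear in a suspect text. Below is a minimal toy sketch of the detection-side counting only; the hash-based split and z-score are my own simplifications, not the researchers’ exact implementation, which biases the model’s logits during generation.

```python
# Simplified, hypothetical sketch of green-list watermark detection.
# Real watermarking operates on an LLM's vocabulary and logits at generation
# time; this toy version works on whitespace-split words to show the counting.
import hashlib
import math

def is_green(prev_token: str, token: str) -> bool:
    """Deterministically assign roughly half of all tokens to the 'green'
    list, keyed on the previous token."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] % 2 == 0

def watermark_score(tokens: list[str]) -> float:
    """Return a z-score for the green-token count. Unwatermarked text should
    hover near 0; text from a generator that favors green tokens scores high."""
    pairs = list(zip(tokens, tokens[1:]))
    green = sum(is_green(prev, tok) for prev, tok in pairs)
    n = len(pairs)
    expected, stddev = n / 2, math.sqrt(n) / 2  # binomial with p = 0.5
    return (green - expected) / stddev

sample = "the quick brown fox jumps over the lazy dog".split()
print(f"watermark z-score: {watermark_score(sample):.2f}")
```

A watermark-aware generator would consult something like is_green while sampling and prefer green continuations, which is what makes the later count statistically detectable; that generation-side piece is what companies would have to build into their chatbots.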

The trouble is that this method requires AI companies to embed watermarking in their chatbots right from the start. OpenAI is developing these systems but has yet to roll them out in any of its products. Why the delay? One reason might be that it’s not always desirable to have AI-generated text watermarked. 

One of the most promising ways ChatGPT could be integrated into products is as a tool to help people write emails or as an enhanced spell-checker in a word processor. That’s not exactly cheating. But watermarking all AI-generated text would automatically flag these outputs and could lead to wrongful accusations.

The original startup behind Stable Diffusion has launched a generative AI for video

Set up in 2018, Runway has been developing AI-powered video-editing software for several years. Its tools are used by TikTokers and YouTubers as well as mainstream movie and TV studios. The makers of The Late Show with Stephen Colbert used Runway software to edit the show’s graphics; the visual effects team behind the hit movie Everything Everywhere All at Once used the company’s tech to help create certain scenes.  

In 2021, Runway collaborated with researchers at the University of Munich to build the first version of Stable Diffusion. Stability AI, a UK-based startup, then stepped in to pay the computing costs required to train the model on much more data. In 2022, Stability AI took Stable Diffusion mainstream, transforming it from a research project into a global phenomenon. 

But the two companies no longer collaborate. Getty is now taking legal action against Stability AI—claiming that the company used Getty’s images, which appear in Stable Diffusion’s training data, without permission—and Runway is keen to keep its distance.

Gen-1 represents a new start for Runway. It follows a smattering of text-to-video models revealed late last year, including Make-a-Video from Meta and Phenaki from Google, both of which can generate very short video clips from scratch. It is also similar to Dreamix, a generative AI from Google revealed last week, which can create new videos from existing ones by applying specified styles. But at least judging from Runway’s demo reel, Gen-1 appears to be a step up in video quality. Because it transforms existing footage, it can also produce much longer videos than most previous models. (The company says it will post technical details about Gen-1 on its website in the next few days.)   

Unlike Meta and Google, Runway has built its model with customers in mind. “This is one of the first models to be developed really closely with a community of video makers,” says Runway CEO Cristóbal Valenzuela. “It comes with years of insight about how filmmakers and VFX editors actually work on post-production.”

Gen-1, which runs on the cloud via Runway’s website, is being made available to a handful of invited users today and will be launched to everyone on the waitlist in a few weeks.

Last year’s explosion in generative AI was fueled by the millions of people who got their hands on powerful creative tools for the first time and shared what they made with them. Valenzuela hopes that putting Gen-1 into the hands of creative professionals will soon have a similar impact on video.

“We’re really close to having full feature films being generated,” he says. “We’re close to a place where most of the content you’ll see online will be generated.”
