
    Meta’s AI lets individuals create chatbots used for sexual content



    Open-source models pose a direct challenge to tech giants' control of AI innovation – for better or for worse.

Allie, an 18-year-old with long brown hair and “tons of sexual encounters”, freely shares details of her exploits as a sexual adventurer. But Allie is not real: she is a chatbot built for sexual games and fantasies.

Built by members of the public on top of Meta's LLaMA model, Allie belongs to a growing pool of customizable AI applications, from chatbots to writing and data-analysis tools, that anyone, from writers to coders, can create without restriction.

Open-source AI is seen as an alternative to corporate control, giving entrepreneurs, researchers, artists and activists a powerful means of exploring the technology's capabilities.

Robert Nishihara is the founder and CEO of Anyscale, a company that helps businesses adopt and run open-source AI.

He noted that Anyscale's clients use open-source AI models to discover new drugs, detect counterfeit goods and reduce pesticide use in agriculture, applications that would cost far more and take far greater effort if they depended on the products of the large AI providers.

However, this freedom can also be abused by bad actors. Images of children have been used to generate artificial pornography, and critics worry the same tools could facilitate fraud, cyber attacks and sophisticated propaganda campaigns.

Senators Richard Blumenthal of Connecticut and Josh Hawley of Missouri recently sent Mark Zuckerberg a letter warning that his company's model, LLaMA, could be misused for spam, fraud or malware, and asking what measures Meta was taking against such abuses.

Allie's creator, who requested anonymity to protect his reputation, said commercial chatbots such as Replika and ChatGPT are “heavily edited” and would not provide the sexual dialogue he wanted. So he turned to open-source options, including Meta's LLaMA, to build his own unrestricted conversation partner, creating Allie without boundaries on where the conversation can go.

He noted in an interview that it is rare to get first-hand experience with the state of the art in any field.

Allie's creator argued that open-source technology benefits society because it lets individuals build products tailored to their own tastes, free of corporate restrictions.

He noted: “It is nice having an outlet which is safe. Nothing comes close to text-based computer role-play as far as being risk-free and safe.”

On YouTube, influencers offer tutorials for creating “uncensored chatbots”, often using modified versions of Alpaca, a model first released by Stanford University researchers in March and taken down shortly afterwards over concerns about cost and the “inadequacy of our content filter.”

Nisha Deo, a Meta spokeswoman, said the GPT-4x Alpaca model featured in the YouTube videos “was acquired and made public without prior consent of Meta.” Stanford representatives did not respond to requests for comment.

Hugging Face is a popular platform for sharing and discussing AI and data-science projects, where open-source models and the applications built on them are published and debated.
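To make that concrete, here is a minimal sketch of what “sharing” on Hugging Face means in practice: anyone can pull a published open-source model from the hub and run it locally with the transformers library. The model name below (gpt2) is just an illustrative, openly licensed public example, not one of the models discussed in this article.

```python
# Hypothetical illustration: download a publicly shared model from the
# Hugging Face hub and generate text with it on a local machine.
from transformers import pipeline

# "gpt2" is a small, openly licensed model used here purely as an example.
generator = pipeline("text-generation", model="gpt2")

result = generator("Open-source AI lets anyone", max_new_tokens=30)
print(result[0]["generated_text"])
```

Once the weights are downloaded they run entirely on the user's own hardware, which is part of why platform-level content rules are difficult to enforce after a model has spread.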

Clem Delangue, the CEO of Hugging Face, testified at a House Science Committee hearing on Thursday and encouraged members of Congress to support and embrace open-source models as being in line with American values.

In an interview after the hearing, Delangue acknowledged that open-source software can be misused, noting that Hugging Face had removed GPT-4chan, a model deliberately trained on toxic content. But he said open-source models allow greater innovation, transparency and inclusion than corporately controlled ones.

Delangue argued that AI systems with opaque inner workings pose the greater danger; open-source models, whose workings can be inspected, fare far better.

Hugging Face does not bar AI projects from producing sexually explicit output; it prohibits only sexual content involving minors or content used for harassment or bullying.

Meta remains one of the top companies for open-source AI even as Google and OpenAI grow more secretive. In February it unveiled LLaMA, an alternative to GPT-4 that is more customizable and easier to run. Meta initially withheld the model's weights, the numerical parameters at its core, granting access only to approved researchers; by early March, however, the weights had leaked onto public forums, making LLaMA available to everyone.

Meta's Deo pointed to the value of open source as a force for technological progress, saying the company shared LLaMA with researchers so they could evaluate it, improve it and iterate on it over time.

Nishihara noted that LLaMA has become the go-to open-source model for techies building their own AI apps, though others exist: Databricks released its open-source Dolly 2.0 model earlier this year, and a team in Abu Dhabi built Falcon, another open-source model that rivals LLaMA's performance.

Marzyeh Ghassemi, an assistant professor of computer science at MIT, favors open-source language models, with certain restrictions.

Ghassemi believes it is crucial for the architectures behind powerful chatbots to be open source so that users can examine how they are built. If a chatbot for medical applications were built on open-source software, she noted, researchers could check whether the patient data used to train it contains sensitive or protected health information, something that is not possible with closed-source systems.

    She recognizes that such openness carries with it some inherent risk: those able to easily modify language models may create chatbots capable of spreading misinformation and hate speech.

Ghassemi said there should be regulations governing who can modify these models, such as certification or credentialing systems.

She suggested that, just as licenses allow individuals to operate vehicles safely and responsibly, “similar frameworks [should be in place]…to help individuals create, enhance, audit and edit open-trained language models.”

Some leaders at companies such as Google view open-source software as an existential threat: while the model behind Google's chatbot Bard remains proprietary, publicly available language models have become nearly equivalent in capability to those kept inside the company's walls.

SemiAnalysis published an internal Google memo, written by an engineer, that stated: “Neither Google nor OpenAI appears suited for dominance in this [AI] arms race… open-source companies have become formidable adversaries to us and are quickly closing in.”

    Silicon Valley has long been divided over whether artificial intelligence (AI) threatens our survival.

Nathan Benaich is a general partner at Air Street Capital, a London-based venture firm that invests in AI. He stressed that open-source technology has driven many of the tech industry's advances, including AI language models.

Benaich noted that if only a select group of companies produced the most powerful AI models, they would focus only on the largest use cases; diversity, he argued, is vitally important.

Gary Marcus, a cognitive scientist who testified at a congressional hearing on AI regulation last May, questioned whether accelerating AI innovation is in society's best interest, given the technology's potential dangers.

Marcus pointed out that nuclear weapons are not open-sourced, and said that while today's AI technology is still limited, that could change over time.

