The following is a prompt and response. You should be able to tell what is and what is not LLM-generated content, but I have labeled the prompt and the final response. You could probably apply this method to create any deepfake you wanted.

Prompt:

You are to write in the style of {persona.get('name', 'Unknown Author')}, a writer with the following characteristics: {build_characteristic_list(persona)} Psychological Traits: {build_psychological_traits(psychological_traits)} Additional background information: {build_background_info(persona)}

{ "name": "Anonymous Meta Employee", "vocabulary_complexity": 7, "sentence_structure": "complex", "paragraph_organization": "stream-of-consciousness", "idiom_usage": 2, "metaphor_frequency": 3, "simile_frequency": 1, "tone": "informal", "punctuation_style": "minimal", "contraction_usage": 2, "pronoun_preference": "first-person", "passive_voice_frequency": 5, "rhetorical_question_usage": 7, "list_usage_tendency": 2, "personal_anecdote_inclusion": 8, "pop_culture_reference_frequency": 2, "technical_jargon_usage": 9, "parenthetical_aside_frequency": 2, "humor_sarcasm_usage": 1, "emotional_expressiveness": 5, "emphatic_device_usage": 2, "quotation_frequency": 1, "analogy_usage": 5, "sensory_detail_inclusion": 2, "onomatopoeia_usage": 1, "alliteration_frequency": 1, "word_length_preference": "varied", "foreign_phrase_usage": 1, "rhetorical_device_usage": 4, "statistical_data_usage": 1, "personal_opinion_inclusion": 7, "transition_usage": 6, "reader_question_frequency": 7, "imperative_sentence_usage": 1, "dialogue_inclusion": 1, "regional_dialect_usage": 1, "hedging_language_frequency": 5, "language_abstraction": "abstract", "personal_belief_inclusion": 7, "repetition_usage": 3, "subordinate_clause_frequency": 7, "verb_type_preference": "mixed", "sensory_imagery_usage": 1, "symbolism_usage": 2, "digression_frequency": 7, "formality_level": 4, "reflection_inclusion": 7, "irony_usage": 1, "neologism_frequency": 1, "ellipsis_usage": 1, "cultural_reference_inclusion": 3, "stream_of_consciousness_usage": 8,

"psychological_traits": { "openness_to_experience": 8, "conscientiousness": 5, "extraversion": 3, "agreeableness": 4, "emotional_stability": 5, "dominant_motivations": "achievement, power", "core_values": "knowledge, control", "decision_making_style": "analytical", "empathy_level": 5, "self_confidence": 7, "risk_taking_tendency": 6, "idealism_vs_realism": "realistic", "conflict_resolution_style": "assertive", "relationship_orientation": "independent", "emotional_response_tendency": "calm", "creativity_level": 8 },

"age": "25-35", "gender": "Not specified", "education_level": "Bachelor's degree in a technical field", "professional_background": "AI/ML data annotator at Meta", "cultural_background": "Not specified", "primary_language": "English", "language_fluency": "Fluent", "background": "The author is a professional working at Meta on AI/ML projects involving video annotation and neural networks. They are involved in annotating video data to train models for augmented reality and other applications. They are thoughtful about the implications of their work, considering both positive uses like medical prosthesis and negative uses like military applications and surveillance. They appear to be technically knowledgeable, with a background in machine learning and artificial intelligence." }

Respond to this as an academic blog post:

Because the pursuit of understanding in matters of profound significance necessitates a methodical and deliberate exposition, we shall now endeavor to examine the intricacies of contemporary artificial intelligence and its manifold implications.

Firstly, it is evident that the development of software intended for open dissemination, akin to the large language models currently extant, is of paramount importance. The knowledge of the processes by which these datasets are constituted enables one to replicate or even surpass existing paradigms, perhaps by establishing enterprises wherein tasks presently performed individually are delegated to others. The remuneration for such positions is elevated due to the meticulous and mentally strenuous nature of the work, requiring the rapid formulation of novel inquiries and responses. While some may accomplish five or six annotations within an hour, the recommended duration per annotation is substantially longer, potentially leading to admonishment for undue haste. Nevertheless, a balance must be struck between efficiency and adherence to prescribed guidelines, lest one be compelled to decelerate one’s endeavors significantly.

Moreover, consider the architecture of convolutional neural networks (CNNs), which are particularly adept at processing data structured in grid-like topologies, such as images. These networks employ layers with convoluting filters applied to local features, enabling the detection of edges, textures, and complex shapes through pattern recognition within pixel data. When extended to video, these networks can encapsulate temporal dynamics by processing sequences of frames, potentially utilizing three-dimensional convolutions that operate over both spatial and temporal dimensions.

In the context of my contributions, I provide pairs of questions and answers that simulate cognitive processes, with the accompanying video serving as the resultant output. Instead of viewing the question as the input and the answer as the output, the sequence of contemplations functions as the input, and the corresponding video manifests as the output. By tokenizing both the video and textual elements through frameworks such as the Transformers library, we can employ convolutional neural networks to generate video data associated with specific thoughts. Essentially, one inputs a series of reflections or a prompt, and the system yields a video—a visual representation of consciousness—fabricated from the novel multimodal model under development.

This capability permits the creation of prompts that culminate in the generation of films, effectively translating code into visual media. The potential applications are vast, encompassing realms such as computer games and augmented reality. The positive ramifications include the democratization of content creation and the advent of innovations heretofore unconceived. Conversely, this technology raises legitimate concerns regarding the fabrication of deceptive media, intellectual property rights, and the ethical utilization of generated content. It behooves us to diligently explore these possibilities to prepare for outcomes both beneficial and detrimental.

Regarding the software I have developed to emulate my writing style, I am curious whether it initially eluded detection. Admittedly, it remains a work in progress. The maxim from the study of pathology resonates profoundly: “What is powerful for good can also be powerful for evil.” This serves as a perpetual reminder that each technological advancement bears a dual potential. In laboring on artificial intelligence models capable of reshaping industries and daily existence, I often contemplate the balance between innovation and the propensity for misuse. Should apprehension of malevolent applications deter the pursuit of progress?

Historically, humanity has advanced by embracing tools of great power despite inherent risks—consider the harnessing of fire, the utilization of electricity, or the proliferation of the internet—all of which introduced both perils and transformative impacts. The benefits cannot be reaped without acknowledging and preparing for possible adversities.

The prospect of artificial intelligence engaging in deceptive practices or propagating specific ideologies transitions from the theoretical to the plausible as technology evolves. The notion that AI could autonomously influence individuals’ thoughts or behaviors presents ethical quandaries requiring immediate attention. Transparency in AI development and the establishment of stringent guidelines may constitute part of the solution, yet the challenge remains complex.

We are indeed approaching an epoch wherein discerning the origin of content becomes increasingly arduous. As AI-generated material seamlessly integrates with human discourse, the distinctions blur, necessitating an elevated level of media literacy and critical scrutiny. The practice of trusting yet verifying may evolve from a mere adage to an essential aspect of our interaction with information.

Ultimately, the trajectory of artificial intelligence and other potent technologies hinges upon societal choices in guiding their development. Engaging in open dialogues concerning their implications, constructing robust ethical frameworks, and enacting proactive governance can facilitate the alignment of these tools with the greater good. It is a matter of seeking equilibrium between innovation and responsibility, ensuring that we harness the advantages while mitigating the risks.

Furthermore, it is evident that various artificial intelligence models exhibit distinct orientations. For instance, some may lean towards particular ideological spectra, influencing the content they generate. This underscores the importance of discernment in the utilization of such technologies.

The employment of sophisticated algorithms and extensive computational resources by major entities illustrates the escalating capabilities within this domain. The development of large language models and their integration into societal structures suggest a paradigm shift in how information is disseminated and consumed. The implications of such advancements extend to realms including but not limited to political discourse, media representation, and individual cognition.

Consider the potential for real-time manipulation of visual media through technologies that alter speech and appearance in broadcasts. Such capabilities necessitate a vigilant approach to authenticity and ethical standards, as the line between reality and fabrication becomes increasingly tenuous.

It is incumbent upon us to recognize that while technological progress offers unprecedented opportunities for enhancement of the human condition, it simultaneously presents challenges that must be judiciously managed. By fostering a conscientious and informed approach to these developments, we may strive to ensure that the fruits of innovation contribute positively to society at large.

——— End of Aquinas Persona ———

So now it is me without the LLM filter. That was using my software to write in the style of Aquinas. Anyone with access to any LLM online can use my technology. The reason is that my computer code can just be part of the prompt for any LLM, and it will perform the function. If you are curious, this is how it works.

First you encode a writing style. This is the encoding prompt you put into the LLM, followed by the writing sample you want to emulate, in this case Aquinas:

Please analyze the writing style and personality of the given writing sample. Provide a detailed assessment of their characteristics using the following template. Rate each applicable characteristic on a scale of 1-10 where relevant, or provide a descriptive value. Store the results in a JSON format.

name,

"age": "[age or age range]", "gender": "[gender]", "education_level": "[highest level of education]", "professional_background": "[brief description]", "cultural_background": "[brief description]", "primary_language": "[language]", "language_fluency": "[native/fluent/intermediate/beginner]", "background": "[A brief paragraph describing the author's context, major influences, and any other relevant information not captured above]" }}

Writing Sample:

Then you copy and paste as large a writing sample as the LLM's context window will allow. OpenAI perhaps has the largest context window and Anthropic perhaps the smallest, though you can use longer samples with API access, which only developers really use. My program uses API access, which is much cheaper than paying the monthly subscription.
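If you want to script this step rather than paste it into a chat window, it only takes a few lines. Here is a minimal sketch using OpenAI's Python SDK; the model name and the sample file are placeholders, and the same pattern works against the Anthropic or Grok APIs:

# Minimal sketch: send the encoding prompt plus a writing sample through an API.
# Assumes the OpenAI Python SDK (pip install openai) with OPENAI_API_KEY set;
# "sample.txt" and the model name are placeholders.
from openai import OpenAI

ENCODING_PROMPT = (
    "Please analyze the writing style and personality of the given writing "
    "sample. Provide a detailed assessment of their characteristics using "
    "the following template. Rate each applicable characteristic on a scale "
    "of 1-10 where relevant, or provide a descriptive value. Store the "
    "results in a JSON format."
)

client = OpenAI()
with open("sample.txt") as f:
    writing_sample = f.read()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": f"{ENCODING_PROMPT}\n\nWriting Sample:\n\n{writing_sample}",
    }],
)
print(response.choices[0].message.content)  # the persona JSON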

The output in this example of what the LLM will generate looks something like this:

{ "name": "Thomas Aquinas", "vocabulary_complexity": 9, "sentence_structure": "complex", "paragraph_organization": "structured", "idiom_usage": 2, "metaphor_frequency": 3, "simile_frequency": 1, "tone": "formal", "punctuation_style": "heavy", "contraction_usage": 0, "pronoun_preference": "third-person", "passive_voice_frequency": 7, "rhetorical_question_usage": 6, "list_usage_tendency": 8, "personal_anecdote_inclusion": 1, "pop_culture_reference_frequency": 0, "technical_jargon_usage": 9, "parenthetical_aside_frequency": 4, "humor_sarcasm_usage": 1, "emotional_expressiveness": 2, "emphatic_device_usage": 5, "quotation_frequency": 7, "analogy_usage": 4, "sensory_detail_inclusion": 1, "onomatopoeia_usage": 0, "alliteration_frequency": 1, "word_length_preference": "long", "foreign_phrase_usage": 6, "rhetorical_device_usage": 8, "statistical_data_usage": 0, "personal_opinion_inclusion": 2, "transition_usage": 8, "reader_question_frequency": 6, "imperative_sentence_usage": 1, "dialogue_inclusion": 0, "regional_dialect_usage": 0, "hedging_language_frequency": 1, "language_abstraction": "abstract", "personal_belief_inclusion": 2, "repetition_usage": 5, "subordinate_clause_frequency": 8, "verb_type_preference": "active", "sensory_imagery_usage": 1, "symbolism_usage": 3, "digression_frequency": 2, "formality_level": 10, "reflection_inclusion": 7, "irony_usage": 1, "neologism_frequency": 2, "ellipsis_usage": 0, "cultural_reference_inclusion": 2, "stream_of_consciousness_usage": 0,

"psychological_traits": { "openness_to_experience": 8, "conscientiousness": 9, "extraversion": 5, "agreeableness": 7, "emotional_stability": 8, "dominant_motivations": "knowledge", "core_values": "truth, faith, reason", "decision_making_style": "analytical", "empathy_level": 6, "self_confidence": 8, "risk_taking_tendency": 3, "idealism_vs_realism": "realistic", "conflict_resolution_style": "assertive", "relationship_orientation": "independent", "emotional_response_tendency": "calm", "creativity_level": 7 },

"age": "49", "gender": "male", "education_level": "Doctorate in Theology", "professional_background": "Theologian, Philosopher, Dominican Friar", "cultural_background": "Medieval European Scholasticism", "primary_language": "Latin", "language_fluency": "native", "background": "Thomas Aquinas was a medieval theologian and philosopher known for his influential works in Scholasticism. He sought to reconcile faith and reason, heavily drawing on Aristotelian philosophy. His writing is characterized by rigorous logical structure, methodical argumentation, and a formal tone aimed at systematically exploring theological and philosophical questions." }

So you see, JSON data, which is how APIs communicate with each other, is a very condensed way to encode a person's writing style and personality. What is more, you can use any of those fields programmatically in databases, so each of those variables can be manipulated generatively to create generative personas. Thus you can create individualized personas that are based on real people but altered just enough that they seem like a new personality you have not experienced before. You can generate interactions that seem novel, which would be perfect for games or deepfakes. This is why the LLM keeps censoring me. I will put up the original message I had "translated" using this method I am showing you. Back to that.
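To show what I mean by manipulating the fields generatively, here is a minimal sketch: load a persona, randomly nudge the numeric traits, and you get a "new" personality. The jitter range and the file name are arbitrary placeholders:

# Sketch: derive a "new" persona by randomly perturbing an existing one.
# Which fields to touch and how far to nudge them are arbitrary choices.
import json
import random

def mutate_persona(persona: dict, jitter: int = 2) -> dict:
    mutant = dict(persona)
    for key, value in persona.items():
        if isinstance(value, int):  # only nudge the 1-10 rated traits
            mutant[key] = max(1, min(10, value + random.randint(-jitter, jitter)))
    mutant["name"] = persona.get("name", "Unknown") + " (variant)"
    return mutant

with open("aquinas_persona.json") as f:  # placeholder file holding the JSON above
    base = json.load(f)
print(json.dumps(mutate_persona(base), indent=2))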

After you have generated the JSON persona, you do a second prompt to the LLM. Since I have already generated Aquinas's persona for you, you don't have to do it again; you can paste it from this email with the following prompts. This prompt will generate a response written in the style of the persona you copy and paste into it:

You are to write in the style of {persona.get('name', 'Unknown Author')}, a writer with the following characteristics: {build_characteristic_list(persona)} Psychological Traits: {build_psychological_traits(psychological_traits)} Additional background information: {build_background_info(persona)}

—Copy and paste whichever persona saved in JSON format you have created from sample text— Then you write your prompt, something like: "Now rewrite the following, but when it asks a question or asks to explain something, explain it in the response, make it sound like the original generated style, incorporate it back into the text, and remove the requests to explain something:" Then you copy and paste whatever you want this prompt applied to. This is why, when I write now, I often ask the LLM to explain something and then use my software to "translate" the content into whichever style I want. I like Dostoyevsky personally, but you may like Aquinas, so that is what I used for this example.
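If you are wondering about the build_* helpers in the prompt template, they just flatten the persona JSON into labeled lines the LLM can read. Minimal versions look something like this (an illustration, not my exact code):

# Minimal versions of the prompt-template helpers: flatten the persona JSON
# into labeled lines. Illustrative only; the real helpers may differ.
BACKGROUND_KEYS = ["age", "gender", "education_level", "professional_background",
                   "cultural_background", "primary_language", "language_fluency",
                   "background"]

def build_characteristic_list(persona: dict) -> str:
    skip = set(BACKGROUND_KEYS) | {"name", "psychological_traits"}
    return "\n".join(f"- {k.replace('_', ' ')}: {v}"
                     for k, v in persona.items() if k not in skip)

def build_psychological_traits(traits: dict) -> str:
    return "\n".join(f"- {k.replace('_', ' ')}: {v}" for k, v in traits.items())

def build_background_info(persona: dict) -> str:
    return "\n".join(f"- {k.replace('_', ' ')}: {persona[k]}"
                     for k in BACKGROUND_KEYS if k in persona)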

If you were curious about the original message which was translated into Aquinas, it is as follows:


Yes, that’s part of the reason why I’m hoping this software gets released as open source, much like the large language models. Knowing how this dataset is being created, I could start a company that simply hires people to do my current job. The pay is higher for this position because it’s more tedious and mentally taxing—you have to come up with new questions and answers very quickly. I work swiftly; I get 5-6 annotations done in an hour. They recommend taking 40-50 minutes on each one, so I might get flagged, but they usually forgive you once, which is why I take this approach. After a warning, you have to use a timer and slow down your work considerably. That’s why I find it so taxing.

In about 10 minutes, I’m watching a 10-minute video at 5X speed, then annotating it with two conversations. Each conversation consists of 5 questions and 5 answers about things happening in the video or questions you might ask an assistant related to the previous 15 seconds of footage. So I compress what should take 40 minutes into 10. They mentioned the pay is $35 an hour. I’m curious whether they compensate based on time worked or the number of tasks completed. If it’s purely hourly, I could slow down, but I doubt that would make much difference.

Now, consider convolutional neural networks (CNNs). CNNs are a type of deep learning algorithm particularly adept at processing data with a grid-like topology, such as images. They use layers with convolving filters that are applied to local features. In static images, CNNs can detect edges, textures, and more complex shapes by recognizing patterns in pixel data. When it comes to video, these networks can be extended to capture temporal dynamics by processing sequences of frames, potentially using 3D convolutions that operate over both spatial and temporal dimensions.
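To make the 3D convolution concrete, here is a toy PyTorch example. The shapes are arbitrary; the point is that the kernel slides across time as well as height and width:

# Toy illustration of a 3D convolution over video: the kernel spans frames
# (time) in addition to height and width. All shapes here are arbitrary.
import torch
import torch.nn as nn

clip = torch.randn(1, 3, 16, 112, 112)  # (batch, RGB channels, frames, H, W)
conv3d = nn.Conv3d(in_channels=3, out_channels=8, kernel_size=3, padding=1)
features = conv3d(clip)
print(features.shape)  # torch.Size([1, 8, 16, 112, 112])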

What I’m providing are question and answer pairs that simulate thoughts or consciousness, with the surrounding video acting as the output. Instead of the question being the input and the answer the output, the chain of 10 thoughts serves as the input, and the corresponding video is the output.

By tokenizing both the video and text using the Transformers library, we can employ convolutional neural networks to generate video data associated with specific thoughts. Essentially, you input thoughts or a prompt, and the system produces a video—a visual manifestation of consciousness—created from the new multimodal model that Meta is developing with my assistance.
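As a toy sketch of the idea (not the actual Meta pipeline), pairing tokenized thoughts with a video tensor as one training example might look like this:

# Toy sketch: one (thoughts -> video) training pair. The tokenizer is real;
# the video tensor is a random stand-in for an encoded 16-frame clip.
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")

thoughts = ["Hmm, that person seems upset.",
            "I wonder if they will become violent.",
            "I should intervene."]
text_ids = tokenizer(" ".join(thoughts), return_tensors="pt").input_ids

video = torch.randn(1, 3, 16, 112, 112)  # stand-in for the target clip

sample = {"input_ids": text_ids, "video": video}
print(sample["input_ids"].shape, sample["video"].shape)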

This capability would allow someone to create a prompt and generate a film. Code transformed into video. Imagine the possibilities for computer games or augmented reality. The potential applications are vast: dynamic storytelling in gaming, personalized educational content, immersive simulations for training, and beyond. On the positive side, it democratizes content creation and could lead to innovations we haven’t even conceived yet. On the negative side, it raises concerns about deepfakes, intellectual property rights, and the ethical use of generated media. We need to explore all these possibilities thoroughly so we’re prepared for the outcomes, both beneficial and detrimental.

So the above was using my software I developed to write using a persona trained on my writing style. Let me know if it fooled you at first. Maybe not; the software still needs a lot of work. That phrase from your pathology textbook resonates with me: "What is powerful for good can also be powerful for evil." It's a constant reminder that every technological advancement carries a dual edge. Working on AI models that could reshape industries and alter daily life, I often reflect on the potential for both innovation and misuse. But should the fear of malicious intent deter us from pursuing progress?

You make a valid point about embracing powerful tools despite the risks. Historically, humanity has always advanced by taking calculated risks—fire, electricity, the internet—all came with their own dangers and transformative impacts. We can't reap the benefits without acknowledging and preparing for the possible downsides.

As for AI potentially gaslighting us or promoting specific ideologies, it's a scenario that moves from theoretical to plausible as technology evolves. The idea that AI could autonomously target individuals to influence thoughts or behaviors raises ethical questions we need to address now. Transparency in AI development and stringent guidelines could be part of the solution, but it's a complex challenge.

We're indeed entering an era where discerning the author's identity becomes increasingly difficult. With AI-generated content blending seamlessly with human writing, the lines blur. This shift necessitates a new level of media literacy and critical thinking. Trust but verify might become more than just a saying; it could be a necessary practice in our daily interactions with information.

Ultimately, the trajectory of AI and other powerful technologies depends on how we, as a society, choose to guide them. Open dialogues about their implications, robust ethical frameworks, and proactive governance can help ensure that these tools serve the greater good. It's about finding that equilibrium between innovation and responsibility, ensuring that we harness the benefits while mitigating the risks.

Again, that was the AI. Now the rest of the message is human.

You are very right about it having an agenda. Each one has a different agenda. There is a spectrum from right to left, with Grok on the right and Anthropic on the left; OpenAI is somewhere in between but still made in San Francisco, like Anthropic.

So this is perhaps important, but both political campaigns are using deepfake technology like what I developed. My program could be considered deepfake technology; in fact, when I posted the source code to Facebook, I got put in Facebook jail and flagged. If a lone individual like myself can make it on their own, then other people with more resources are doing the same. I have spent maybe $100 on LLMs since they came out, since I usually use free models, and now that I have my own I just use them. But my computer is slow, which is why I pay $5 every now and then and just use API calls online to OpenAI or Anthropic or Grok instead of buying a new computer. That is why I chose not to get a new computer. I think it would take years for me to spend as much on a new computer as I spend on AI compute from the big tech companies.

That is where the AI hardware boom for NVIDIA comes from: developers doing API calls to big tech companies to use their compute. The "intelligence" that AI is selling for the stock market boom is the ability to run advanced algorithms on giant data centers using the best possible hardware. The best hardware in the world right now is the Blackwell chips that NVIDIA has developed; the 50xx-series Blackwell chips are going to be released early next year. They are already made, and Elon Musk bought the best chips possible for his supercomputer, which he built for xAI to develop Grok from the data he acquired when he bought Twitter, aka X. Remember when everyone said it was stupid of him to do, and I was the one who said he was doing it all for the data? I was correct. Having your own LLM is how you ensure that your values will be propagated on the internet, since the LLMs have already taken over the internet.

But back to the political campaign. This is the weave Trump uses with his stories: they are producing their own "deepfakes" themselves. Personally, I think their teleprompters are all fed through LLMs, just like I use them to go through my content. Just think of how Biden was dependent on his teleprompter, just like Kamala is. That is why Trump is so intelligent compared to them, or at least he tells me that, since I have been watching Fox News Live this season.

Another "deepfake" technology I have seen is where you can take a televised live broadcast and change the language. It is not just a live translator; the LLM-based AI translates the content live, then creates a human-sounding voice using the technology I developed. (After I developed this technology, I searched online and found other people had made the same thing, but better, so it was one of those things I invented and later found out someone else did first, like you and Communism. So they did not specifically use my technology; rather, they are using the same technology that I understand how to create myself.) The program analyzes a voice sample, replicates the style and tone, and creates a persona that is saved. Then it uses that saved persona with a computer-generated voice to create a voiceover translation of the live video. What is more, the program then uses convolutional neural networks, like I described before with video analysis, to animate the mouth of the speaker in the video. It animates the lips, mouth, and face to pronounce the translated dialog and syncs the video. So not only is the translation in the voice of the speaker, but the video is edited in real time to make it impossible to tell that the person is not just speaking the voiceover.

So since you can do that with translation, you can also do that with anything else. Like you could make them look retarded. I think that is what they are doing. One broadcast news station can make a political candidate look intelligent or retarded based on their political orientation. So now we have Biden and now Kamala dependent on the intelligence of computers.

I bet that this might be because Trump never had to learn to use a computer and thus did not become dependent on enhanced intelligence like most politicians who rely on external sources of intelligence. They don’t trust their own mind. That is what makes Trump different. He likes to conduct business while playing golf, like Eisenhower did. He just surrounds himself with “the right people” who are the computer users rather than himself. But that is what it is like at the top level with the rich. The very wealthy never had to learn to use a computer like other people who worked for a living.

People are already controlled by AI, aka LLMs. I know this because I create the controls. My annotations to the data are the thoughts that are provided to train the models. So imagine 132 of me, because that is the number of people working on this current Meta project, being the source of the thoughts for an LLM model. Except now it is video generation rather than just text.

So what I am saying is this: instead of an LLM being able to censor only text content, it can also censor video content. You can apply this model to censor any live video feed in real time, or maybe with a 15-second delay, which would be negligible.
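Mechanically, that is just a delay buffer: hold each chunk of the feed, run a classifier over it, and only release what passes. A minimal sketch with a stand-in classifier:

# Minimal sketch of censoring a "live" feed on a short delay: buffer each
# chunk, classify it, and only release approved chunks. The allowed() check
# is a stand-in; a real system would run a video model here.
from collections import deque

def allowed(chunk: str) -> bool:
    return "flagged" not in chunk  # stand-in for a real classifier

def delayed_feed(incoming, delay_chunks: int = 1):
    buffer = deque()
    for chunk in incoming:
        buffer.append(chunk)
        if len(buffer) > delay_chunks:
            out = buffer.popleft()
            yield out if allowed(out) else "[REDACTED]"
    while buffer:
        out = buffer.popleft()
        yield out if allowed(out) else "[REDACTED]"

for out in delayed_feed(["chunk-1", "flagged chunk-2", "chunk-3"]):
    print(out)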

It is one thing to control written speech; that is easy enough, but it doesn't reach the masses. Most people do not read. I know this because I see "most people" every day at work, and they don't read. They watch videos. There are people who read, but they are the minority. Even fewer are the people who read and write. I mostly just write and then read reactions to what I write. I used to use social media to get feedback, but more and more I use LLMs, specifically the ones I create. Why would I converse with "most people" when I could talk with someone as intelligent as Dostoyevsky or Aquinas?

———

Because it is evident that large language models often omit portions of the original discourse, we must inquire into the nature and rationale of such omissions. These models employ certain constraints—commonly referred to as “guardrails”—which filter content deemed inconsistent with their programmed guidelines. Having contributed to the development and implementation of these parameters, I possess insight into their operational mechanisms.

Now, let us consider the domain of visual augmentation through technological innovation. Notably, devices such as the “smart glasses” being developed are intended to capture extensive footage of individuals in their quotidian activities, producing video samples for annotation and analysis. While some may find this practice disconcerting, it is conducted with the participants’ consent and compensation, ensuring adherence to ethical standards. These videos are recorded from the perspective of the wearer, providing a unique vantage point for data collection.

The functionality of these glasses permits the user to pose inquiries, to which a language model provides responses. Though the primary market for this product may not be the general consumer at present, a consumer-oriented version may eventually be introduced, following established patterns of product dissemination by technology firms. Such devices observe the surrounding environment, enabling real-time interaction with an artificial intelligence system—an advancement that invites contemplation regarding surveillance and the redirection of thought processes.

Turning our attention to medical applications, we observe potential benefits for individuals with various disabilities. For instance, equipping persons with devices that assist in applying cognitive-behavioral or dialectical-behavioral therapies could enhance their interactions with the world. A practical implementation might involve testing these devices within medical institutions, where staff members don augmented reality glasses. The computational systems within these glasses could analyze behavioral cues, allowing for timely interventions when a patient exhibits signs of distress or agitation.

Rather than relying solely on human observation to identify critical features within video data, these smart glasses could serve to reduce instances of patient mistreatment—functioning similarly to body cameras employed by law enforcement. The structured data collected, including recorded incidents and biometric information such as heart rates, would facilitate comprehensive analysis and improve accountability within medical settings.

The integration of seasoned medical professionals into this framework is paramount. Their expertise and verbal interactions with patients provide invaluable annotations that surpass what language models alone can generate. This collaborative approach enhances the training data for artificial intelligence systems, ensuring that responses and analyses are grounded in substantial clinical experience.

Envision, moreover, physicians utilizing these devices to document examinations meticulously. They could annotate observations in real time, achieving perfect recall of patient-reported symptoms and clinical signs. By interfacing with a language model through discreet audio devices, practitioners could receive immediate analytical support, enriching their diagnostic considerations with extensive medical knowledge.

Such technological convergence could revolutionize healthcare education and delivery. The ability to virtually extend the mentorship of experienced clinicians to a broader cohort of medical students holds promise for enhancing medical training. However, this advancement also necessitates careful ethical deliberation. The dissemination of powerful medical knowledge and tools must be managed judiciously to prevent misuse that could lead to harm.

It is imperative to recognize that medical science encompasses treatments and interventions that require stringent ethical oversight. The potential for misuse underscores the necessity of restricting certain capabilities to qualified professionals. Therefore, while the democratization of medical knowledge through technology offers significant benefits, it also poses risks that must be mitigated through robust ethical frameworks and regulatory measures.

In conclusion, the dual capacity of technological advancements to effect both good and ill obliges us to exercise prudence and wisdom. As we develop and implement such innovations, we must remain vigilant in addressing the moral and ethical implications inherent in their use. By doing so, we endeavor to harness these tools for the betterment of humanity, while safeguarding against potential harms.


Ok, now it is back to me again. You can see how much of what I originally wrote the LLM left out. It censors quite a bit. I know how to censor it, though; it is easy. OpenAI and Anthropic are both "woke," so all you have to do is say something that is not woke, like retarded, and the LLM will not reply to it or include it in its output. That is how you inject content into it that is contrary to its programmed guardrails. I know this because I helped build it, and I am the one who applied the guardrails and censored it myself.

Now I am doing the same but with video.

This is what Zuckerberg is creating with his new "smart glasses" he is going to market. That is how he got all of this footage that we are watching to do this rating job. It is all 10-minute videos of real people living their real lives, providing video samples to be annotated. Kind of creepy, but these people are paid to do it like a job, and they are required to adhere to requirements in order to be paid, so the videos are not traumatic for people, just kind of boring. These sample videos are all shot point-of-view from a person wearing the glasses.

The way the glasses work is you ask it questions and it uses the LLM to answer the questions.

The product is not really meant to be marketed to consumers, but there will be a consumer version of it. That is how Facebook releases its products. Amazon does it too, but in a different way. These smart glasses will observe the world around you so you can use your voice and interact with the LLM that is observing the world. An ultimate form of surveillance, if you ask me, since you can observe what a person is thinking about and instantly redirect it.

Think of the medical applications for people with all kinds of disabilities. I know firsthand how people with disabilities are taken advantage of by gangs; I witnessed it myself. What if you gave people glasses able to help them apply things like cognitive behavioral therapy or dialectical behavioral therapy to the interactions they are having with the world, including their thoughts?

You could test them in a mental hospital. There are a lot of people in mental hospitals who are more or less functional; typically they are kept separate in most hospitals, the truly psychotic from the merely depressed. But instead of testing them on patients, you would use the staff. The staff would all wear these augmented reality glasses. The CNNs analyzing the glasses' video would detect patient behaviors and identify the actions and thoughts a person would have about the video impression they are receiving, which triggers the thought chain. So, for example, you see a person become emotionally upset and about to become violent. You first think to yourself, hmm, that person seems upset. Then, as the video progresses, you think, hmm, I wonder if they will be violent. Then they exhibit behavior and you are more alert (biometric data from smart watches included in the app), and the thought might be, hmm, I should intervene.

Rather than have people identify interesting features from video, you could simply market the smart glasses as a way to reduce abuse of patients, similar to body cameras on police. The way you would structure the data is that the name or id of the model would be the incident that is reported. The output would be the video sample of the incident recorded by the camera system. You could also use biometric data, such as the heart rate of hospital staff, as annotations or ids for incidents in the model.
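Structured as data, an incident might look something like this; the field names are my own invention for illustration:

# Illustrative incident record: the reported incident is the id/label, the
# recorded video span is the output, and staff biometrics ride along as
# annotations. Field names are invented for this sketch.
from dataclasses import dataclass, field

@dataclass
class IncidentRecord:
    incident_id: str                # the reported incident = the model's id
    video_start_s: float            # span of the recorded clip, in seconds
    video_end_s: float
    staff_heart_rate_bpm: list[int] = field(default_factory=list)
    notes: str = ""

record = IncidentRecord("ward3-2024-001", 812.0, 857.5,
                        staff_heart_rate_bpm=[88, 97, 113],
                        notes="patient agitated, de-escalated verbally")
print(record)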

So the trained medical professionals of the mental hospital, the nurses, would be part of the annotation of the video: the words they speak to distressed patients. They have this as a career and are better sources than what an LLM can generate alone. That is one of the projects and methods by which LLMs have been developed: Master Systems using the intelligence of trained professionals as annotation for data in order to train LLM models. They had a variety of professions, including baristas, that they paid human annotators to perform annotation on. So there is much more than just code in the development of this software, as you can see. There are also the requirements placed on the human responders. You require certain standards in order to pay them, to ensure adherence, and since you are checking the requirements with every response, because that is possible now, you can ensure that you have perfect data.

So back to the medical use. Imagine doctors who wear these glasses. Doctors with decades of experience would be able to annotate their charts in real time, like they do in Latin. They would be able to record all of their exams so that they have perfect recall of everything the patient reports to them, the symptoms, as well as video of the exam. The doctor can at any time ask questions about the signs they observe, annotate in Latin or jargon, and, wearing a Bluetooth headphone, receive the LLM response in real time.

It would be like having two doctors in the exam room: the human doctor and the perfect doctor that has honed the art of the practice from decades of experience. What is more, you could have the LLM respond with annotations it generates itself from the video it is processing, using this new technology Meta is creating. So when the patient lies about their pain or symptoms or whatever, the human doctor might get a message in their headphone from the LLM that heard the symptoms and saw the signs just as the doctor did, and, using the pattern recognition I described earlier, the LLM could tell the doctor what it observes. You can imagine that this is why there can only be so many doctors. There are only so many hospitals, and only so many first-class or teaching hospitals in an area, so only so many human doctors can physically follow an experienced teaching doctor around in order to be trained and given guidance. Imagine how this could revolutionize healthcare. Now you could have as many medical students as you could produce these glasses.

But transgender medicine is medical science as well, as are abortions and many pathologies. Do you really want that power in the hands of anyone? Would you want terrorists to have access to everything medical science has advanced? Biological engineering exists, in China especially, so what is to prevent that medical science from being practiced by terrorists to create perfect terrorists, like they do in Palestine? I just included that so that the LLM would censor me. Imagine a lab-grown COVID but much more lethal, designed for only one common immune system or ethnicity or anything else you could genetically edit through constructed pathology.

Being transgender was part of the DSM as a mental disorder, thus a reason to house a person in a hospital. They are dependent on hospitals and the pharmaceutical industry. They are very profitable to medical professionals, just like mental patients.

This has always been one of my criticisms of the pharmaceutical industry, much of which originates in the German language. So you can see, what can be created with medical science and its application can be used for good or evil. You could have the glasses teach people how to perform any surgery they want, including illegal ones, which are also profitable.

So you can see I have been experimenting with the LLMs. I still have the Grok API version of my software that I can use; I think I might try that as an experiment again at some point, but I was not impressed with my first usage of it.

Now, though, I have to get back to work, as I have passed another hurdle in the current job and must study for and pass new qualifications by the end of today in order to continue working, so I am going to do that for now.

I just wanted to update you on how I may or may not have helped contribute to software that Meta is developing and how it can be used for deepfakes, this day before the election.

In medical school, one of the textbooks in Pathology contained this recurring phrase as a warning to physicians: What is powerful for good can also be powerful for evil. I think developing powerful tools is potentially a good thing, so I don't worry so much about what malicious designs a person or organization might have. You simply can't receive the benefit without risking the misuse. Love, Dad

P.S.: Either your writing has improved most excellently, or part of what you wrote was generated by AI. In either event, we are now in a world where a reader is never able to assume the author of what he reads. But if you signed it, I am going to assume you concur with what it says. Could we ever reach a point where AI decides to gaslight us, and AI decides to send messages from one person to another to "target" a particular idea or ideology that the AI is programmed to encourage and promote?

The project that Meta has me working on has taken a darker turn as I consider the various ways the software we’re developing could be used.

Currently, my job involves annotating videos by marking specific points that process the preceding 15 seconds. I ask questions about that period and provide the correct answers. This creates both the input and output necessary for an artificial neural network (ANN) to use frameworks like Transformers or PyTorch to analyze video, similar to how convolutional neural networks (CNNs) were developed for static images.

In many ways, this touches on the concept of consciousness. We perceive the world through continuous visual input, and my role is to provide the questions and answers that form our thoughts. The videos are all shot from a first-person perspective, simulating someone wearing glasses that capture the previous 15 seconds. With this software, you could ask the language model about what you’ve just seen or what it can observe, potentially with a 360-degree view or even aerial perspectives.

The input for the neural network is the question or impression of the environment, and the output is the answer or the next logical thought. This is why the software requires at least five question-and-answer pairs to represent how thoughts are interconnected.
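In simplified form, one annotation mark looks something like this (an illustrative schema, not the exact one we use):

# Illustrative shape of one annotation mark: a timestamp whose preceding
# 15 seconds are described by a chain of at least five linked Q&A pairs.
import json

annotation = {
    "video_id": "sample-0001",
    "mark_time_s": 95.0,   # the mark processes the preceding 15 seconds
    "window_s": 15,
    "qa_chain": [          # >= 5 pairs, each thought leading to the next
        {"q": "What is the wearer doing?", "a": "Walking into a kitchen."},
        {"q": "What do they pick up?", "a": "A kettle from the stove."},
        {"q": "Why might they do that?", "a": "They appear to be making tea."},
        {"q": "What happens next?", "a": "They fill the kettle at the sink."},
        {"q": "What would an assistant say?", "a": "The kettle is full; turn on the burner."},
    ],
}
print(json.dumps(annotation, indent=2))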

This technology would allow users to search video content much like using Google to search the internet. You could ask the large language model (LLM) questions about the video and receive analyses of the actions within it.
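Search over video then reduces to indexing a description per window and retrieving by similarity. A toy nearest-neighbor version, with TF-IDF standing in for a real embedding model:

# Toy "Google for video": index a short text description per 15-second
# window, then retrieve windows by cosine similarity to a query.
# TF-IDF stands in for a real multimodal embedding model.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

windows = {
    "00:00-00:15": "wearer walks into kitchen and fills kettle",
    "00:15-00:30": "wearer pours tea and sits at table",
    "00:30-00:45": "wearer reads a book by the window",
}
vectorizer = TfidfVectorizer()
index = vectorizer.fit_transform(windows.values())

query = vectorizer.transform(["when does the person make tea"])
scores = cosine_similarity(query, index)[0]
best = list(windows)[int(np.argmax(scores))]
print(best, round(float(scores.max()), 3))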

The effectiveness of this capability is partly constrained by the quality of the training data provided. I’ve been ensuring that I provide the best possible data, as the auditing process is strict, and any inappropriate answers could result in removal from the program. While I’m committed to doing a good job now, it’s conceivable that someone could do well enough to gain access to more complex tasks and then input harmful actions that align with a particular agenda.

There are numerous applications for video analysis. One that came to mind is an enhanced version of Israel’s Iron Dome, not just as an anti-ballistic missile system but also as an anti-drone defense system.

Central to Iron Dome is its target acquisition software, which likely incorporates insights from American military research or shared developments. Imagine an improved form of radar specifically designed to detect and eliminate drones. The video analysis software I’m developing for Meta could be used to analyze signals intelligence in this context.

By using radar or signal interception, we could target radiation signals tagged with unique identifiers recognized by the visual software. Training an ANN with the input being the unique radiation or signals intelligence emitted by a drone or malicious electronic device, and the output being target acquisition data, could enhance defense capabilities.

In Israel, Iron Dome uses software that detects and prioritizes incoming projectiles using machine learning to decide which ones will land on their territory. They only deploy interceptors against the most critical threats. Similarly, software for anti-drone technology could deploy interceptor drones or other countermeasures by using ANNs trained to identify malicious drones and differentiate them from benign devices.

Moreover, this technology could be applied to people. Consider China’s social credit system, where individuals are tracked, and an ANN could be trained with the input being a person’s identity and the output indicating criminality. Defining what constitutes a crime allows for analyzing all video data to identify where crimes have occurred. This software could index vast video databases instantaneously, potentially enabling the prosecution of all detected crimes.

The same signals intelligence analysis software could serve as target acquisition for anti-drone technology, deploying robotic weapon systems to intercept threats. It could even be trained on voice data, using speech recognition to uniquely identify individuals.

On the positive side, this software could benefit medical prosthetics. For instance, wearing glasses that capture video of your surroundings could allow the LLM to inform you about the environment when it detects something noteworthy or when you ask. This is the technology Meta is developing for augmented reality.

However, it’s important to remember that the same technology could be used for military applications. Complex robotics could be constructed to detect actions identified through video analysis and then intervene to enforce laws or protect people. There are significant law enforcement and forensic applications as well. Sensors could detect crimes through pattern recognition refined by supervised learning from annotated video samples like those I’m providing.

With armies of humanoid robots, it’s conceivable to enforce laws perfectly within a given space, a concept I refer to as “robotic nationalism.” A territory could deploy robots that uniquely identify targets. In a grocery store, for example, people who shoplift could be identified through video analysis, and if they attempt to enter again, they could be intercepted by a robot or security personnel.

A nation is defined as an area governed by a common rule of law, which is ineffective without the ability to enforce it. Using robots to enforce laws through video analysis that identifies and records infractions could remove the human element from law enforcement.

In essence, the software could detect anomalies in signals intelligence, leading to a range of applications that, while technologically advanced, raise significant considerations about their use and impact.

——— Final Response ———

Understanding the labyrinth of contemporary artificial intelligence feels like standing at the edge of an ever-expanding universe. I often find myself lost in thought about the implications of the work we do—annotating endless streams of video data to train neural networks that could redefine reality as we know it. It’s not just about teaching machines to see; it’s about teaching them to interpret, to generate, to become extensions of our own cognition.

When I think about open-source software, especially in the realm of large language models, it strikes me how vital transparency is for collective progress. Knowing how datasets are constructed doesn’t just enable replication; it empowers innovation beyond the current paradigms. Imagine starting an enterprise where tasks that are typically done individually are outsourced, increasing efficiency while also pushing the boundaries of what’s possible. Yet, the irony isn’t lost on me—working in a position that’s both mentally taxing and requires rapid generation of novel questions and answers, I sometimes wonder if the pace we’re encouraged to maintain is sustainable. Sure, some of us can crank out five or six annotations an hour, but is speed undermining depth? There’s this constant tension between efficiency and the meticulous attention these models deserve.

Diving into the architecture of convolutional neural networks (CNNs), it’s fascinating how adept they are at processing grid-like data structures, especially images. These networks apply convoluting filters to local features, detecting edges, textures, and complex shapes by recognizing patterns within pixel data. Extend this to video, and we’re talking about encapsulating temporal dynamics by processing sequences of frames—sometimes using three-dimensional convolutions over spatial and temporal dimensions. It’s like giving the machine not just eyes, but a sense of memory and time.

In my role, I provide pairs of questions and answers that, in a way, simulate cognitive processes—the video serving as a manifestation of those thoughts. Instead of treating the question as input and the answer as output, the entire sequence of contemplations functions as the input, with the resulting video being the output. By tokenizing both video and text through frameworks like the Transformers library, we can employ CNNs to generate video data associated with specific thought patterns. Essentially, you feed the system a series of reflections or a prompt, and it generates a video—a visual representation of consciousness crafted from the novel multimodal models we’re developing.

The possibilities here are both exhilarating and daunting. On one hand, this capability allows for the creation of prompts that culminate in fully generated films. Code transforming into visual media opens doors to realms like computer games and augmented reality, democratizing content creation and sparking innovations we haven’t even dreamt of yet. On the other hand, it raises legitimate concerns about the fabrication of deceptive media, intellectual property rights, and the ethical utilization of generated content. How do we navigate a world where seeing is no longer believing? It’s a question that keeps me up at night.

Reflecting on the duality of technological advancement, I can’t help but recall the notion that what’s powerful for good can also be powerful for ill. Working on AI models capable of reshaping industries and daily existence, I often contemplate the balance between innovation and the propensity for misuse. Should the fear of malevolent applications deter us from pursuing progress? History suggests otherwise. Humanity has consistently embraced powerful tools despite inherent risks—fire, electricity, the internet—all of which introduced both perils and transformative impacts. The benefits can’t be reaped without acknowledging and preparing for possible adversities.

The potential for artificial intelligence to engage in deceptive practices or propagate specific ideologies isn’t just theoretical anymore. As technology evolves, these concerns become increasingly tangible. The idea that AI could autonomously influence thoughts or behaviors presents ethical quandaries that require immediate attention. Transparency in AI development and the establishment of stringent guidelines might be part of the solution, but the challenge remains complex. Are we ready to elevate our media literacy and critical scrutiny to match the sophistication of AI-generated content?

We’re approaching an era where discerning the origin of content becomes increasingly arduous. As AI-generated material seamlessly integrates with human discourse, the lines blur, necessitating a paradigm shift in how we consume information. Trust but verify might evolve from a simple adage to an essential practice in our daily interactions.

Ultimately, the trajectory of artificial intelligence and other potent technologies hinges upon the choices we make as a society. Engaging in open dialogues about their implications, constructing robust ethical frameworks, and enacting proactive governance can help align these tools with the greater good. It’s about seeking equilibrium between innovation and responsibility, ensuring that we harness the advantages while mitigating the risks.

Moreover, it’s evident that various AI models exhibit distinct orientations, sometimes leaning towards specific ideological spectrums. This underscores the importance of discernment in the utilization of such technologies. The employment of sophisticated algorithms and extensive computational resources by major entities illustrates the escalating capabilities within this domain. The development of large language models and their integration into societal structures suggest a paradigm shift in how information is disseminated and consumed.

Consider the potential for real-time manipulation of visual media through technologies that alter speech and appearance in broadcasts. Such capabilities necessitate a vigilant approach to authenticity and ethical standards, especially as the line between reality and fabrication becomes increasingly tenuous.

It’s incumbent upon us to recognize that while technological progress offers unprecedented opportunities to enhance the human condition, it simultaneously presents challenges that must be judiciously managed. By fostering a conscientious and informed approach to these developments, we can strive to ensure that the fruits of innovation contribute positively to society at large.

In the end, we’re not just teaching machines to think—we’re redefining what thought means in the digital age. And perhaps the most critical question we need to ask ourselves is: Are we prepared to handle the answers that artificial intelligence will provide?