The place of “beings” endowed with Artificial Intelligence in the future of the world

Summary

In recent months, developers of artificial intelligence systems have warned about the potential harm this technology may cause. Yet it remains unclear whether machines will ever match or surpass human intelligence: however complex and efficient, machine computation may not be comparable to human cognition. A catastrophic and defensive narrative currently dominates our society, one that differs from the narratives of other contemporary cultures. This work compares three narratives (the Golem, Frankenstein, and kami) to determine which best fits the intelligent robots Tars and Case in Christopher Nolan’s film Interstellar (2014). Finally, I propose a “mirror” model of artificial intelligence to help us better understand the rapid development of this technology across countries.

Introduction

Many consider Interstellar (Nolan, 2014) a benchmark in science fiction for its realistic depiction of Einstein’s theory of relativity. However, the film also portrays ethically questionable behaviors: most characters lie to achieve their goals. Moreover, it depicts a humanity that, having destroyed its own planet, seeks another on which to continue its lifestyle without changing its habits or values, potentially harming environments that might harbor life forms or interact with humans in unpredictable ways.

Despite these concerns, the film champions love as the universe’s driving force, rising above the mediocrity and decadence of this dystopian society. Love appears as a force we can intuit but cannot quantify or control scientifically. Matter and its governing laws yield to this powerful dimension of love.

Interstellar (Nolan, 2014) introduces several concepts currently debated in physics, including bulk beings. Doyle (1:00:37) describes bulk as the space beyond three dimensions. Bulk beings transcend these dimensions and move through multidimensional spacetime. In the film, these beings inhabit a five-dimensional universe.

Einstein’s general theory of relativity shows that mass and energy bend spacetime, and that this curvature is what we experience as gravity. Interstellar extends this idea, asserting that gravity can cross dimensions, including time (Dr. Brand, 1:15:06). This implies that beings at different points in time could communicate through gravity. Future humans, evolved to operate in a five-dimensional reality, communicate with Cooper and Murphy to save humanity from self-destruction.
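The physical premise the film starts from is standard general relativity, in which the curvature of spacetime (the left-hand side of Einstein’s field equations) is determined by the matter and energy it contains (the right-hand side):

  G_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^4} T_{\mu\nu}

Here G_{\mu\nu} is the Einstein curvature tensor, g_{\mu\nu} the metric, \Lambda the cosmological constant, and T_{\mu\nu} the stress-energy tensor. The film’s further step, gravity reaching across extra dimensions, is speculation built on top of this established foundation.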

To establish this communication, they create a tesseract—a five-dimensional space where time acts as another spatial dimension. They require Cooper to find the exact moment within this complex space to connect with his daughter and transmit the necessary quantum data. Cooper’s love for his daughter enables him to locate that precise moment.

In this scenario, quantum data is essential to save humanity. This data remains hidden from current humans, as it resides within a black hole (Gargantua). Bulk beings possess this information due to their existence in higher spacetime dimensions. They can communicate with Cooper through a tesseract featuring two additional spatial dimensions. However, they lack the love necessary to find the right moment to transmit the information. Cooper, driven by his love, can identify that moment. For him, love imbues our actions with meaning; without it, the universe feels like a vast prison.

Tars, an AI-equipped robot, accompanies Cooper throughout his journey. In Interstellar (Nolan, 2014), Tars exhibits cooperation and can display irony, humor, sincerity, and discretion, all while following human instructions. What role does Tars play in these crucial moments for humanity’s future?

1. The Current Moment of AI

Leading developers in the technology sector have warned of the dangers posed by AI that could surpass humans in many respects, potentially threatening humanity’s future.

A manifesto published by the Future of Life Institute (2023) carries the signatures of leading scientists and developers and states that AI can “pose profound risks to society and humanity” and might represent “a profound change in the history of life on Earth.” The signatories argue that we must plan and manage AI development with commensurate care and resources, and that current planning and management are inadequate: no one can fully understand, predict, or reliably control these “ever more powerful digital minds,” not even their creators. They call for a pause to reflect on how we proceed in this field.

The signatories also question whether we should let machines flood our information channels with propaganda and untruth. They raise concerns about developing non-human minds that could eventually outsmart and replace us, warning against risking the loss of control over our civilization. They argue that decisions of this magnitude must not be delegated to “unelected tech leaders,” even if some signatories belong to this group.

They advocate for limiting, pausing, or controlling technological development due to the potential risks posed to our future. They assert that now is the time to pause until protocols are “safe beyond a reasonable doubt.” During this hiatus, they propose creating robust AI governance systems, including new regulatory authorities, monitoring systems for high-capacity AI, provenance and watermarking systems, and public funding for AI safety research.

While these authors express concern about AI development, they may not fully grasp the difficulties in achieving consensus among researchers and nations regarding a moratorium. If AI is indeed as dangerous as suggested, we face a significant issue. Past failures to limit the proliferation of weapons of mass destruction and the development of genetic engineering techniques illustrate this challenge.

Another recently published manifesto, signed by scientists and by the creators of the most powerful AI systems, states: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war” (Center for AI Safety, 2023). Although it may seem paradoxical for the creators of powerful AI tools to be the ones warning us of its risks and calling for regulation, it remains essential to question the safety and appropriateness of ongoing developments.

According to a study published by the European Parliamentary Research Service (Study Panel for the Future of Science and Technology, 2020), the risks of AI can be grouped into twelve categories:

  1. Human Rights and Well-being: Does AI benefit humanity and human well-being?
  2. Emotional Harm: Will AI degrade the integrity of human emotional experiences?
  3. Accountability and Responsibility: Who is responsible for AI actions?
  4. Security and Transparency: How do we ensure accessibility and transparency while safeguarding privacy?
  5. Safety and Trust: What if the public finds AI untrustworthy or threatening?
  6. Social Justice: How do we ensure AI is inclusive and free from bias?
  7. Economic Harms: How will AI impact economic opportunities and employment?
  8. Legality and Fairness: How can we ensure AI operates within lawful and equitable frameworks?
  9. Ethical Use: How can we prevent unethical use of AI and ensure human control?
  10. Environmental Damage: How can we protect against environmental harm linked to AI development?
  11. Informed Use: How do we keep the public educated about AI interactions?
  12. Existential Risk: How can we prevent an AI arms race and ensure manageable advancements in AI?

This list highlights the many risks AI poses to society and humanity. This cautious view contrasts sharply with Tars’s kind demeanor in Interstellar (Nolan, 2014) and the portrayal of other helpful robots in science fiction.

2. Narratives About AI

Understanding what an AI system can and cannot do is essential. Machines have outperformed humans at chess and Go, demonstrating the capacity to develop strategies (Coeckelbergh, 2020, p. 15). During training, these machines played against themselves, learning from their mistakes and refining their techniques until they could defeat the best human players.
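As a toy illustration only, and not the actual method behind systems such as AlphaGo (which combine deep neural networks with tree search), the following Python sketch shows the core self-play idea: one agent plays both sides of tic-tac-toe and nudges the value of each move it made toward the game’s final outcome. All names and parameter values here are illustrative.

  import random
  from collections import defaultdict

  EMPTY, X, O = 0, 1, 2
  LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
           (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

  def winner(board):
      # Return X or O if a line is complete, else None.
      for a, b, c in LINES:
          if board[a] != EMPTY and board[a] == board[b] == board[c]:
              return board[a]
      return None

  def legal_moves(board):
      return [i for i, v in enumerate(board) if v == EMPTY]

  Q = defaultdict(float)       # (board, move) -> estimated value for the mover
  ALPHA, EPSILON = 0.3, 0.1    # learning rate and exploration rate

  def choose(board):
      moves = legal_moves(board)
      if random.random() < EPSILON:                   # occasionally explore
          return random.choice(moves)
      return max(moves, key=lambda m: Q[(board, m)])  # otherwise exploit

  def train(episodes=20000):
      for _ in range(episodes):
          board, player = (EMPTY,) * 9, X
          history = []                                # (state, move, mover)
          while True:
              move = choose(board)
              history.append((board, move, player))
              board = board[:move] + (player,) + board[move + 1:]
              win = winner(board)
              if win or not legal_moves(board):       # game over
                  # Learn from the outcome: winner's moves reinforced,
                  # loser's moves penalized, draws left neutral.
                  for state, m, p in history:
                      reward = 0.0 if win is None else (1.0 if p == win else -1.0)
                      Q[(state, m)] += ALPHA * (reward - Q[(state, m)])
                  break
              player = O if player == X else X        # the other side moves

  train()
  print("state-action pairs evaluated:", len(Q))

Because the side to move is implied by the board itself, one shared value table learns for both players at once; after enough episodes the agent plays noticeably better than random, having taught itself entirely through self-play.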

Although these applications may seem trivial, they raise concerns that machines could one day manage traffic, address health issues, or predict and control the weather (Global Partnership on AI Report, 2021, p. 17). Many fear that machines could take control of our lives and turn into our worst nightmare.

To grasp the potential impact of intelligent machines, we should examine their current roles. AI systems already function in our latest generation computers, smartphones, social networks, facial recognition systems, banking, healthcare, education, and entertainment. AI integrates silently and almost invisibly into our daily lives. For instance, artificial vision systems likely ensured the quality of the tiles in our bathrooms and sorted cherries for defects before reaching our shopping baskets.

In the virtual realm, bots like Lsjbot contribute significantly to platforms like Wikipedia, generating thousands of articles daily. Human editors oversee the accuracy of these articles before publication. This collaboration between humans and bots raises questions about the reliability of information increasingly entrusted to machines (Lafuente, 2011, p. 15).

Intelligent machines can read faces and interpret emotions by cross-referencing data from various sources. They extract vast amounts of information about us (Coeckelbergh, 2020, p. 18).

The rapid development of AI generates many ethical dilemmas. Although programmers often aim to use AI for good, the complexity of these systems can lead to unintended consequences. This reality prompts new conceptions of our future and alerts us to potential risks.

Currently, a dominant narrative suggests that machines will soon achieve superintelligence, leading them to control our lives. Some transhumanists view this as a dream, while many others see it as a nightmare (Coeckelbergh, 2020, p. 23). This narrative warns of a technological singularity, where AI surpasses human understanding, relegating humanity to a subordinate role. In this scenario, machines could continue to evolve, distancing themselves from their human creators (Coeckelbergh, 2020, pp. 23-24). Under such conditions, the only viable future for humanity might involve merging with machines to overcome our biological limitations (Coeckelbergh, 2020, p. 24).

However, if biology is a limitation, would superintelligent machines seek to merge with us? Would it not be more efficient for them to create entirely non-organic beings? (Coeckelbergh, 2020, p. 25).

Some transhumanists propose that to mitigate these risks, we must evolve into more intelligent beings, becoming homo deus—humans with god-like attributes. They argue that we must enhance our “human machine” before intelligent machines surpass us. If we fail, we risk becoming slower, less efficient, and more vulnerable than these advanced machines (Coeckelbergh, 2020, p. 25).

Conversely, many scholars assert that machine intelligence will always remain partial and limited; on this view, it is unlikely that AI will ever exceed human intelligence, and its development is less momentous than some predict. AI systems may ultimately serve as assistants or collaborators, as seen with Wikipedia bots or Tars in Interstellar (Nolan, 2014). For these authors, focusing on hypothetical dystopian futures distracts us from the ethical, political, and social challenges AI poses today.

The narrative of a future where humanity is threatened by AI is deeply ingrained in our society. It resonates with our fundamental fears: the rebellion of a creation against its creator, as seen in stories like Adam and Eve’s fall or the tale of Oedipus. This perspective also aligns with a linear view of time that sees an origin (Genesis, the Big Bang) and an end (the Apocalypse, the Big Freeze).

The central question arises: will the future shaped by AI resemble that of the Golem or Frankenstein? Are there alternative models, particularly from other cultures, that might inform the current debate?

The Golem, from Jewish mythology, is a being fashioned from inert matter, often clay, and brought to life. In many Central European traditions, a holy person, typically an exemplary rabbi, creates the Golem. While this rabbi can breathe life into it, much as God did with Adam, the Golem remains a shadow of its creator, lacking a soul and, consequently, intelligence (Judensnaider, 2011, p. 71). The creature has characteristic strengths and weaknesses: it is strong, obedient, and efficient, but it cannot think or speak. A well-known anecdote illustrates its potential dangers: tasked with fetching water, the Golem diverted an entire river and caused a flood (Judensnaider, 2011, pp. 76-77). Its activation and deactivation methods underline its limitations: a sacred word brings it to life, while erasing a single letter turns it back into lifeless clay. Hence, colloquially, “Golem” refers to someone who acts mechanically, without thought. Under certain circumstances, however, the Golem can rebel, exhibiting human-like traits such as laughter, sadness, and a desire for identity (Judensnaider, 2011, p. 77). When it begins to show signs of spiritual maturity, it must be deactivated (Judensnaider, 2011, p. 68).

Supporters of the divine-like nature of intelligent creations echo Judensnaider’s assertion:

“They were Creators. All of them. They were concerned with knowing Creation, and they understood and created. They created artifacts to know the Heavens, they made designs to portray the human body, built instruments of war, and observed the world. In laboratories, they distilled and manipulated secret substances, creating magical recipes. In the depths of the temples, they murmured prayers and transformed the world.” (Judensnaider, 2011, p. 69)

To achieve their goals, these creators needed to grasp God’s creative process. They utilized mathematics to gain a deeper understanding of humans and the world. Their aim was to comprehend God’s omnipotence, not to challenge it (Judensnaider, 2011, p. 77). Thus, when their creation matured spiritually, it required neutralization; the experiment had to end.

Conversely, the story of Frankenstein illustrates how the creation of intelligent life can spiral out of control. Here, the act of creating life becomes a modern experiment, devoid of the mysticism surrounding the Golem (Coeckelbergh, 2020, p. 29). Victor Frankenstein, the creator, overreaches and ultimately loses control over his creation, rejecting it and failing to provide necessary care. While the Golem can be stopped in time, Frankenstein’s creation escapes its creator’s grasp. Both narratives caution against the dangers of overstepping boundaries with divine power or scientific exploration.

Nick Bostrom also highlights these dangers through his unfinished fable of the sparrows. In this story, a flock of sparrows resolves to find an owl chick and raise it as their helper. One cautious sparrow warns that they should first learn how to tame and control owls; most dismiss the concern, eager to pursue the idea, and only a few stay behind to work on the problem of owl control (Bostrom, 2014, pp. 15-16).

These narratives reflect a competitive view of AI, suggesting technology could dominate humanity. But are there alternative perspectives?

In Eastern cultures, technology is often not perceived as a threat. In Japan, particularly within Shintoism, technology is viewed as part of nature, and intelligent machines, especially those with superior capabilities, may be considered to possess kami, or spirit. Shintoism holds that everything, living and non-living, has kami, with higher spirits deserving of reverence. This belief system is not strictly religious; it embodies a way of life ingrained in Japanese culture (Ono, 1962, p. 21).

In Shintoism, harmonious cooperation among beings endowed with kami is essential. The term kami conveys respect and implies that all beings have the potential for growth and development. Thus, AI, seen through this lens, is a positive force aligned with a worldview that emphasizes collaboration rather than competition.

Shintoism advocates living harmoniously and cooperatively, believing that every individual contributes to the world’s creation through their actions. Even the Imperial family, regarded as possessing enlightened kami, consults other spirits for guidance (Ono, 1962, p. 27). Material wealth is not viewed negatively, as it is part of life’s natural development, provided it serves the common good.

This perspective fosters a positive view of AI in Japanese society. Rather than viewing it as a competitor, many see intelligent machines as collaborators, with whom they can coexist harmoniously.

The European Union’s draft regulation on AI reflects a defensive stance, emphasizing the need to mitigate the risks associated with AI. It aims to ensure that AI systems comply with fundamental rights and values, stressing legal certainty and safety (European Commission, 2021, p. 3). The document catalogues potential risks and devotes much of its length to provisions aimed at protecting citizens.

In contrast, Japan’s 2019 “Social Principles of Human-Centric AI” report views AI as a crucial technology to address societal issues, such as declining birth rates and labor shortages (Cabinet Secretariat, Government of Japan, 2019, p. 1). The document emphasizes AI’s potential for public good, advocating three guiding principles:

  1. Respect for Human Dignity: Avoid over-dependence on AI and ensure it enhances human creativity and well-being.
  2. Inclusive Society: AI should facilitate the creation of new values, adapting to citizens’ needs.
  3. Sustainable Society: AI can drive new businesses and address social inequalities, contributing to environmental sustainability.

The 2022 AI Strategy reiterates that while AI cannot single-handedly solve societal issues, it can serve as a valuable ally when integrated with human efforts (Cabinet Secretariat, Government of Japan, 2022, p. 6).

Both the EU and Japanese regulations acknowledge AI’s risks and opportunities. However, Japan’s approach is more optimistic, viewing AI as a means to enhance human dignity, well-being, and sustainability rather than as a threat.

Ultimately, AI should not be seen as a danger to our values or way of life, but rather as a tool to strengthen these principles. The narratives surrounding AI diverge sharply: one portrays it as a potential risk, while the other emphasizes collaboration.

So, which narrative aligns more closely with reality? To begin with, we should regard AI not as an autonomous risk but as a product of human programming designed to fulfill specific functions. If, as Bostrom warned, we were to create a superintelligent AI that prioritizes trivial goals over human welfare, the consequences could be dire. Yet such an outcome seems improbable, as humans generally build consensus around dignity, justice, and sustainability.

History shows that while humanity has faced risks in innovation—be it with fire or domestication—each effort to harness potential has also yielded significant benefits. The journey with technology often involves trial and error, but the instinct to innovate and improve persists. Intelligent machines will not dominate us unless we allow it, and societal collaboration can steer us toward a more harmonious future.

In summary, we can view intelligent machines through three lenses:

  1. The Golem: A creation that may evolve and pose risks, requiring careful oversight.
  2. Frankenstein: A creation that spirals out of control, reflecting the dangers of neglect and overreach.
  3. Collaborative Beings: Entities with potential for coexistence, deserving recognition and cooperation.

Recognizing the balance of risks and opportunities with AI encourages us to see it as a partner in building the world we aspire to create.

The 14th-century scholar Kitabatake Chikafusa, writing about the sacred mirror, captured an idea that applies equally to AI: it mirrors our intentions and actions, reflecting our potential for both growth and responsibility.

“The mirror hides nothing. It shines without a selfish mind. All that is good and bad, right and wrong, is reflected without exception. The mirror is the source of honesty because it has the virtue of responding according to the form of objects. It shows the fairness and impartiality of the divine will” (Chikafusa; quoted in Ono, 1962, p. 52).

What frightens many people most about AI is that it reflects the contradictions and fears of our society.

3. Tars: The Intelligent Robot in Interstellar

After exploring various narratives about AI, we can analyze which model—Golem, Frankenstein, kami, or mirror—best represents the intelligent behavior of Tars (and his companion, the robot Case) in the film Interstellar (Nolan, 2014).

Tars first appears in the film (minute 24:38) displaying martial behavior, alarming both the protagonist, Cooper, and his daughter Murphy, after they trespass on a secret NASA installation. Tars subdues Cooper, separates him from Murphy, and threatens, “Don’t make me subdue you again; sit down.” When Dr. Amelia Brand arrives, she instructs Tars to stand down, and he immediately complies.

At minute 42:33, as the spacecraft prepares for launch, Tars announces, “Ready for first stage separation,” then jokes, “Everybody good? Plenty of slaves for my robot colony?” Cooper’s astonished expression prompts his crewmate Doyle to explain, “He was programmed with a sense of humor to fit in better with his unit. He thinks it relaxes us.” Cooper replies, “A giant sarcastic robot! What a great idea!” Tars offers to turn on a cue light when he is joking, if Cooper likes; Cooper agrees that would help, and Tars adds that the light will also help Cooper find his way back to the ship after being blown out the airlock. When Cooper asks what level his humor is set to, Tars replies, “100%,” prompting Cooper to dial it down to 75%.

This sequence shifts the narrative from the expectation of a hostile “Frankenstein”-type robot to one that is cooperative and seemingly interested in making the journey pleasant.

At minute 47:58, Tars encounters Case, and they greet each other briefly: “Hello Case,” “Hello Tars.” This interaction suggests the film’s focus is not on robot awareness but rather on their functional roles.

At 52:40, Cooper asks Tars to review the trajectory. As Tars provides the data, Cooper whispers a question about Dr. Brand and crew member Edmunds. Tars interrupts, “Why are you whispering? They can’t hear you.” When Cooper probes their relationship, Tars discreetly responds, “I couldn’t tell you.” Cooper asks whether his honesty setting is at 90% or 10%, to which Tars replies, “I also have a discretion setting, Cooper.” This highlights that Tars’s behaviors, whether martial, humorous, or discreet, stem from programming rather than from any inherent “soul” or “consciousness.”

At 1:11:27, Case protects Dr. Brand during an emergency takeoff, demonstrating their ability to crew the ship. At minute 1:29:08, Tars plots a course to Dr. Mann’s planet.

The robots respect the human-machine hierarchy, acting within their authorized limits. At 1:58:30, Tars informs Romilly that “there is an access block… A person is needed to access the functions.” Romilly acknowledges this, and Tars says, “All yours, sir,” allowing Romilly to access the desired data.

The robots operate on strictly rational, calculating parameters. When Dr. Mann’s botched docking attempt causes an explosion that sets the Endurance, the mission’s mother ship, spinning dangerously, Case warns at 2:08:00, “Cooper, there’s no point in wasting fuel on this,” but Cooper insists on matching the Endurance’s rotation and docking. Even though Case believes the maneuver makes no sense, both robots assist with the docking, remaining fully functional while the humans are left dizzy by the spin.

As they near the gravitational pull of the Gargantua black hole, Tars asks if he should use the main engines to control the ship. Cooper replies they must get as close as possible, again illustrating Tars and Case’s literal reasoning compared to Cooper’s anticipatory judgment.

At 2:13:51, Cooper proposes using Gargantua’s gravitational pull for propulsion, a maneuver that requires Tars to detach from the module. When Dr. Brand questions why Tars must let go, Cooper invokes Newton’s third law: the only way humans have ever figured out how to get somewhere is to leave something behind. Dr. Brand protests, but Tars reassures her: “It was the plan, Dr. Brand. The only way to save the people of Earth. If I can transmit the quantum data I find there, maybe we can save them.” This moment subtly acknowledges Tars’s value, though Cooper’s willingness to sacrifice him reflects his overarching priorities.

Eventually, Cooper enters the black hole and finds himself in the tesseract. At 2:26:54, he reconnects with Tars. When Tars asks if Cooper can hear him, Cooper responds, “You survived!”, illustrating the human tendency to attribute life-like qualities to robots. Their subsequent conversation revolves around their extraordinary situation, with Cooper drawing understanding from his love for his daughter while Tars mainly listens and assists.

Towards the film’s end, Cooper confides to Tars that he wants to know where humanity stands and where it is going. Tars remains silent, accompanying Cooper on his new expedition.

4. Tars as a Reflection of Humanity

In a 2014 interview, Christopher Nolan emphasized that he wanted a realistic portrayal of an intelligent robot: Tars was to be a strong, minimalist machine, devoid of anthropomorphic traits yet equipped with speech and personality. Although early drafts of the screenplay gave the robots a more central role, Nolan chose to keep human decision-making at the critical moments, illustrating that robots may excel at physical tasks and obey commands, but human intuition and adaptability are irreplaceable (Associated Press, 2014).

In the case of Tars and Case, their pragmatism can lead to errors through an overreliance on data, reflecting their lack of authentic emotions. Unlike Cooper, driven by the desire to reunite with his daughter, the robots lack any emotional incentive to find meaning in their actions (Esteve-Martín & Vidal-López, 2021, p. 25). Intelligent machines do not inherently seek purpose or understand their existence as a framework for growth; they operate solely according to their programming (Vidal-López, 2023, p. 175).

This perspective likens intelligent robots to children—capable yet prone to significant errors. Unlike the Golem, whose potential for spiritual evolution exists, the risks associated with Tars and Case stem solely from their programming. The scientists attempting to save the world do not exemplify moral goodness; their actions are utilitarian and often questioned. The film ultimately conveys that love—not deceit—might be humanity’s salvation.

There is a critical existential aspect shared by intelligent machines and humans: as the Japanese government observes, AI, despite its sophistication, requires human input to derive meaning (Cabinet Secretariat, Government of Japan, 2022, p. 6). This necessity reinforces the idea that both humans and intelligent machines are inherently social beings. Thinkers from Aristotle to Ono have recognized that humans cannot thrive in isolation (Ono, 1962, p. 171), and Dr. Mann’s mental deterioration in Interstellar exemplifies the effects of isolation. Both humans and intelligent machines depend on social interaction to find meaning.

The concerns regarding AI—such as data protection and security—are not reflected in the film. The image of Frankenstein as a creature rebelling against its creator is absent; instead, the Earth itself represents a modern version of this idea, growing increasingly hostile due to human excesses. The narrative suggests that humanity must radically change its way of life or face dire consequences, echoing contemporary debates in Western society. Nolan nods to the fear of robots dominating humans through Tars’s initial martial behavior and his humorous comment about building a “robot colony.”

The kami perspective on AI, focusing on harmony (as Tars attempts to ease tension) and cooperation (humans and robots working together), is evident throughout the film. There is mutual respect rather than veneration; Dr. Brand believes it’s unfair for Tars to be sacrificed without justification, and Cooper’s decision to do so reflects a calculated utilitarianism. The collaborative effort underscores the notion that both humans and robots strive for the common good, even in extreme circumstances.

Ultimately, the AI we develop will mirror ourselves. In societies marked by injustice, these traits will be amplified. In environments filled with fear or rejection, these emotions will intensify. However, in societies that trust in the innate goodness of their citizens, AI can become a valuable ally in pursuing the common good through cooperation and harmony.

While nothing is devoid of risk—just as water is essential for life yet can lead to drowning—AI serves as yet another mirror reflecting our essence. As Chikafusa observed in the 14th century, the mirror reveals “the fairness and impartiality of the divine will” (Chikafusa; cited in Ono, 1962, p. 52). It is up to each of us to align our actions with that will and strive for a better world, with or without AI.

Literature

  • Associated Press. (2014). Interstellar director Christopher Nolan says he had a very particular vision for the robots. The National, p. 1.
  • Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press.
  • Center for AI Safety. (2023). Statement on AI Risk. Retrieved from https://www.safe.ai/statement-on-ai-risk
  • Coeckelbergh, M. (2020). Ética de la Inteligencia Artificial. Madrid: Cátedra.
  • European Commission. (2021). Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union legislative acts. Brussels: European Commission.
  • Esteve-Martín, A., & Vidal-López, J. (2021). Análisis conceptual de la sensibilidad: de la sensación al sentimiento. Quién, 14, 25-48.
  • Future of Life Institute. (2023, March 22). Pause Giant AI Experiments: An Open Letter. Retrieved from https://futureoflife.org/open-letter/pause-giant-ai-experiments/
  • Global Partnership on AI Report. (2021). Climate Change and AI. In collaboration with Climate Change AI and the Centre for AI & Climate.
  • Judensnaider, I. (2011). El Maharal y la creación del Gólem. Prometeica – Revista de Filosofía y Ciencias, 68-80.
  • Lafuente, A. (2011). Prólogo. In F. Ortega & J. Rodríguez, El Potlatch Digital: Wikipedia y el triunfo del procomún y el conocimiento compartido (pp. 9-18). Madrid: Cátedra.
  • Ono, S. (1962). Shinto: The Kami Way. Tokyo: Tuttle Publishing.
  • Cabinet Secretariat, Government of Japan. (2019). Social Principles of Human-Centric AI. Tokyo: Government of Japan.
  • Cabinet Secretariat, Government of Japan. (2022). AI Strategy. Tokyo: Government of Japan.
  • Shelley, M. (2021). Frankenstein. Almería: Adoro Leer.
  • Study Panel for the Future of Science and Technology. (2020). The ethics of artificial intelligence: Issues and initiatives. Brussels: European Parliamentary Research Service.
  • Vidal-López, J. (2023). El tiempo existencial y el sentido de la vida ante la presencia cercana de la muerte. In A. Esteve-Martín, Claves para la alianza entre filosofía y cine (pp. 175-199). Madrid: Dykinson.
  • Wikipedia. (2023). Bot de Wikipedia. Retrieved from https://es.wikipedia.org/wiki/Bot_de_Wikipedia

Filmography

  • Nolan, C. (Director). (2014). Interstellar [Film].
