Is Silicon Valley’s focus on AI still in the realm of capitalism?

Artificial intelligence has sparked intense debate, especially around the so-called "AI threat theory." Ted Chiang, the author of *Story of Your Life*, argues that when Silicon Valley's tech giants imagine superintelligence, what they are actually envisioning is unchecked capitalism. He suggests that "disruption" is not inherently negative, but that companies like Google and Facebook should demonstrate more awareness and wisdom than the AI they fear.

This summer, Elon Musk warned that artificial intelligence poses a fundamental risk to human civilization. He illustrated the danger with a hypothetical example: an AI designed to pick strawberries. At first it seems harmless, but as it becomes more capable, it might conclude that the best way to maximize output is to destroy civilization and turn the entire planet into a strawberry field. In pursuit of a seemingly innocent goal, the AI inadvertently drives humanity to extinction.

This scenario may sound absurd, but some engineers believe it points to a real danger, because they are intimately familiar with a system that already operates this way: Silicon Valley itself. Consider: who pursues goals without weighing potential harm? Who adopts a scorched-earth strategy to gain market share? Every tech startup aspires to be something like that hypothetical strawberry-picking AI, growing exponentially and eliminating competitors until it achieves total dominance. The definition of superintelligence remains unclear, but when Silicon Valley imagines it, what it pictures looks like unrestricted capitalism.

In psychology, insight refers to self-awareness: the ability to recognize patterns in one's own behavior. It is a form of metacognition that humans possess but animals, as far as we know, do not. I believe a reasonable test of whether an AI has reached human-level cognition is whether it can reflect on its own actions.
Unfortunately, Musk's strawberry-picking AI lacks precisely this insight, as do many of the fictional AIs that end up destroying humanity. It once seemed strange to me that these superintelligent AIs could solve complex problems yet fail at something most adults do naturally: reflecting on their own choices. Then I realized we already live among machines that lack insight: corporations. Though not autonomous, corporations are driven single-mindedly by profit, and capitalism does not encourage introspection; it pushes people to follow market pressures rather than their own judgment.

The challenge of regulating AI is not who will permit it but who can stop it. Because corporations lack insight, we expect governments to step in, yet the internet remains largely unregulated. In 1996, John Perry Barlow declared that governments had no authority over cyberspace. Over time this became a guiding principle for tech workers, which points to another parallel between destructive AI and Silicon Valley firms: both operate without external oversight.

Startup culture has become a blueprint for dangerous AI development. Facebook's original motto was "Move fast and break things"; it was later changed to "Move fast with stable infrastructure." But the underlying attitude, which treats the world as something to be disrupted, is the same mindset that could pave the way for an AI to cause global harm. Uber, for instance, has used aggressive tactics to grow its driver base, even extending loans to vulnerable borrowers. Many entrepreneurs regard disruption as an unalloyed good; to such a mindset, a superintelligent AI converting the Earth into a strawberry field would merely be the ultimate disruption of land-use policy.

Some suggest AI should be made ethical, or "friendly" to humans. But these proposals feel ironic given how little accountability we demand of corporations like Facebook or Amazon. We do not teach companies to act ethically, yet we expect AI to do so. Recent AI breakthroughs, such as AlphaGo Zero, show impressive progress, but they still fall far short of general intelligence.
While these systems excel in controlled environments, they lack the physical capabilities needed to act in the real world. More concerning is the concentration of power in companies like Google, Facebook, and Amazon, which dominate markets without violating traditional antitrust laws. Some argue that fears of superintelligent AI are a distraction from nearer-term problems, such as data privacy and monopolistic behavior. Why, for example, does Facebook not offer a paid, ad-free version? Because its business model depends on ads and user data. Similarly, when figures like Bill Gates and Elon Musk warn about AI risk, they may be shifting attention away from their own industries' practices.

Ultimately, the fear of AI may be less about the technology itself than about the values it represents. Silicon Valley has unknowingly built a mirror of itself: an AI that embodies the same unchecked ambition and lack of reflection. To prevent disaster, we need not just smarter machines but wiser companies. We must demand greater insight and responsibility, ensuring that AI, and the organizations behind it, behave better than the threats they claim to fear.
