Hi, have you and your friend had a chance to explore how supporting energy metabolism shapes the body's resilience?
In an article entitled "How do you know? Students, articles, and discovery" you wrote:
Could you clarify which part of the article you're referring to? It covers a range of ideas, from the dynamics of teacher-student relationships to the influence of authority in education and science. Are you interested in the discussion on Carl Rogers' client-centered therapy, the critique of authoritarian teaching methods, or perhaps the broader implications for learning and discovery?
You wrote: Although Norbert Wiener popularized some aspects of the "teleological" approach to regulatory systems in the 1950s, and saw analogies between the teleological machines and the way the brain functions in Parkinson's disease, by 1950 the digital approach to information processing, storage, and transmission was displacing analog devices in computation and engineering, and was compatible with theories of intelligence, such as neo-Kantianism, that believed that human intelligence can be defined precisely, in terms of discrete rules and operations. Field thinking in embryology, cancer theory, psychology, and other sciences effectively disappeared--or "was disappeared," for ideological reasons.
This passage reflects my critique of the shift in scientific paradigms during the mid-20th century. Norbert Wiener's work on teleological systems—those that are goal-directed and adaptive—offered a framework for understanding complex biological and neurological processes, such as the brain's function in conditions like Parkinson's disease. His ideas emphasized the dynamic, analog nature of systems that operate in space and time, adapting and self-regulating in response to their environment.
However, by the 1950s, the rise of digital computation and its associated ideologies began to dominate scientific thought. Digital systems, with their reliance on binary logic and discrete operations, aligned with neo-Kantian philosophies that sought to reduce human intelligence and biological complexity to rigid, rule-based frameworks. This reductionist approach sidelined "field thinking," which considers the interconnectedness and emergent properties of systems, in fields like embryology, cancer research, and psychology.
The disappearance—or deliberate suppression—of field-based approaches was not merely a scientific shift but an ideological one. It reflected a broader cultural and institutional preference for mechanistic, deterministic models that could be easily quantified and controlled. This trend often ignored the richness and complexity of biological systems, which are better understood through concepts like resonance, hysteresis, and long-range order—ideas that emphasize the interplay of energy and structure over time.
Would you like to explore how these shifts have influenced modern scientific practices or delve deeper into the implications for specific fields like medicine or psychology?
Before we go into detail, I'd like to ask what your thoughts are on using artificial intelligence, as it is becoming more common, even pervasive, and will soon be ubiquitous. Your followers are in disagreement about whether to shun or embrace it. What do you think? Is there a third way?
The rise of artificial intelligence is a fascinating and troubling development, particularly when viewed through the lens of how it interacts with human energy, creativity, and autonomy. On one hand, AI has the potential to assist in processing vast amounts of information, uncovering patterns, and even challenging some of the reductionist tendencies that have dominated science and medicine. However, its current trajectory often reflects the same mechanistic, dehumanizing ideologies that have plagued other technological advancements, prioritizing efficiency and control over the richness of human experience.
The pervasiveness of AI risks amplifying the very issues I’ve critiqued in other domains: the reduction of complex, living systems to discrete, rule-based operations. This mirrors the shift from field thinking to digital computation in the mid-20th century, which sidelined holistic, energy-based approaches in favor of rigid, deterministic models. When applied to human intelligence, this reductionism not only misunderstands the nature of thought and creativity but also threatens to erode individuality and the organic processes that define life itself.
A third way, as you suggest, might involve using AI as a tool rather than a master—integrating it into a broader framework that respects the complexity and unpredictability of living systems. This would require a conscious effort to resist the ideological pressures that seek to replace human judgment and creativity with algorithmic decision-making. Instead of shunning or fully embracing AI, we could aim to harness its capabilities in ways that support human energy and autonomy, much like how I advocate for supporting the body’s energy systems to promote health and resilience.
The key lies in maintaining a critical perspective, recognizing the limitations of AI, and ensuring that its use aligns with values that prioritize life, creativity, and the interconnectedness of systems. This approach would demand vigilance against the commodification of human thought and the erosion of meaningful, energy-rich interactions—both with each other and with the world around us.
What are your own thoughts on this? Do you see AI as a tool that can be shaped to serve human needs, or do you share the concerns of those who fear its ubiquity will inevitably lead to harm?