#Korea: Positron Solutions - Emacs


2024-06-17 Note korean notes positronsolutions

Are they Korean? Impressive people.

Sponsors

Become a sponsor to Positron Solutions

@positron-solutions Positron Solutions, Korea, South

Our Vision

In less than a minute: Open innovation keeps progress moving by lowering the cost of new competition, enabling faster development, and creating market fluidity, both protecting consumer choice and accelerating advancement.

We are in this for novel materials that replace plastics or enable overcoming tough engineering challenges, biomedical solutions that cure disease, multi-dimensional optimization problems that deliver better products, educational solutions that scale well beyond our collective expertise, and scientific exploration augmented with rigor at our fingertips.

Our Mission

To realize our vision, our first aim is to fix problems as old as Open Source:

  • The limits of legacy governance models that predate information technology and were not designed for the open communities inherent to the internet

  • The lack of a sales model that works between millions of perfectly willing customers and capable producers

  • The inability to support successful software at scale

Our YouTube

Subscribe to our channel for updates.

Our Playlists

We have three playlists:

Open Innovation: Updates on Positron itself, critical ideas in Open Source, and the direction and consequences of technological shifts


Emacs at Lunch: Deeper dives into Emacs package development, extending packages, and customizing beyond the surface


Emacs Speedrun: Only the meta-information necessary to get on a sustainable, high-potential trajectory of Emacs self-development


Our Emacs Work

In order to be active as a supplier of improvements to Emacs, Positron has been investing in packages like Dslide (Domain Specific sLIDEs, formerly Macro Slides), a presentation framework that enables software engineers to communicate better, using their programming tools, without wasting time on external software that provides marginally better display and much less overall capability.

@positron-solutions

@positron-solutions’s goal is to earn $5,000 per month

This goal is a reference level for when we bring on more co-founders, with or without product revenue. Most hardcore co-founders work from savings or salaries far below market, so every contribution goes a long way for us.


“Living Programmably” 2023 Positron’s Emacs Channel

(Positron’s Emacs Channel 2023)

  • Learn about the Software Instrumentality and how the original bootstrapping and generative AIs are coming together to create the perfect opportunity for you to meet the Celestial Emacsen by using a programmable interface to a computer. Buy me a coffee: https://github.com/sponsors/positron-… Check out the channel introduction: “Welcome to Positron Emacs”. For an example of living non-programmably, learn how to type blimpy in Emacs: https://github.com/progfolio/blimpy The LEM editor: https://github.com/lem-project/lem Human Instrumentality: https://en.wikipedia.org/wiki/Neon_Ge… #emacs #programming #opensource

    TIMESTAMPS
    00:00 Solving our own problems
    01:16 Programmable software is here
    03:11 Disruption is coming
    04:00 Primrose paths
    05:16 Other payoffs
    06:05 Software Instrumentality
    06:44 Celestial Emacsen
    07:18 Outro

disruption: destruction?!

  • maturity creates integration

  • new value creation drives dis-integration

  • tools that are generic re-adapt faster

  • programs like emacs thrive on disruption


moving asymptote

[SYS]: You are a helpful assistant. Please write in Korean language.

[ME]: Make me a list of common components of symbols used in Emacs Lisp expressions, but translate each component into Korean

[AI]: Understood! I’ll translate the components of symbols commonly used in Emacs Lisp expressions into Korean:

  1. Function: a function is a block of code that performs some operation. In Emacs Lisp, you can define and call functions.

    • Examples: `defun`, `lambda`
  2. Variable: a variable is a region of memory used to store data.

    • Examples: `setq`, `let`
  3. Conditional: a conditional controls the flow of execution according to a condition.

    • Examples: `if`, `cond`
  4. Loop: a loop executes code repeatedly until a given condition is satisfied.

    • Examples: `while`, `dolist`
  5. List: a list is a data structure that arranges multiple elements in order.

    • Examples: `'()`, `list`, `car`, `cdr`
  6. String: a string represents text data.

    • Examples: `"Hello, World!"`, `concat`
  7. Number: a number represents numeric data.

    • Examples: `+`, `-`, `*`, `/`
  8. Operator: an operator performs a specific operation on two or more operands.

    • Examples: `<`, `>`, `=`, `and`, `or`
  9. Macro: a macro is a tool that automates code generation.

    • Examples: `defmacro`, `macroexpand`

Each symbol plays an important role in Emacs Lisp and can be put to good use when writing all kinds of programs.

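Worth sanity-checking the list above against a real scratch buffer. A minimal sketch of my own (not from the chat) that exercises most of the listed components in one place:

```emacs-lisp
;; Illustrative only: combines function, variable, conditional, loop,
;; list, string, number, and macro from the list above.

(defmacro with-timing (&rest body)            ; macro: generates code
  "Run BODY, then message how long it took."
  `(let ((start (float-time)))                ; variable binding
     (prog1 (progn ,@body)
       (message "took %.3fs" (- (float-time) start)))))

(defun describe-items (items)                 ; function definition
  "Return a string describing each element of the list ITEMS."
  (let ((out ""))                             ; string accumulator
    (dolist (item items out)                  ; loop over a list
      (setq out (concat out
                        (if (numberp item)    ; conditional
                            (format "number %d; " item)
                          (format "symbol %s; " item)))))))

(with-timing (describe-items '(1 two 3)))
;; => "number 1; symbol two; number 3; "
```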

Faster Ramp Up: an era when learning Emacs is genuinely easy

  • Translate docs
  • Semantic search
    • Correction of question terminology
    • Finding what to search for
  • Explain code
  • Suggest solution approaches
    • quickly find bread & butter

Primrose Paths: learning Emacs Lisp

Lisp is Compact

  • Most of a package is fluff
  • Hyper-popular packages gain mass

Found Everywhere

  • SBCL, Guix, Clojure, Racket
  • Scala, Haskell, Nix

The Spares

  • LEM the Common Lisp Emacs

  • Convergent evolution dominates

  • Alternatives just become better instances


You Better Hurry

  • AGI is turning all software into Tang: the Software Instrumentality

  • The celestial Emacsen are waiting


speaker_note If I’m right, you want to start now. If I’m wrong, you will be in front of the wave. But either way it pays off in the long run.


2024-05-23 “Open Innovation: Using Emacs to Bring Depth to Open Source” 2024 Positron’s Emacs Channel

  • (NO_ITEM_DATA:openinnovationusing24a)

🤑

00:00 Intro

00:45 Depth vs Breadth in Open Source

02:10 Big Ways to Improve Emacs

05:00 Bringing Depth to Emacs While Bootstrapping Positron

2024-06-16 “Symbolic AGI: How the Natural Will Build the Formal” 2024 Positron’s Emacs Channel

(Positron’s Emacs Channel 2024)

What on earth are they trying to do?!

#emacs #opensource #programming #machinelearning #logic

True AGI will combine formal and informal methods. People are already combining these tools in this way. M-x Jarvis in our time, but evolving Open Source is critical to delivering real value.


  1. Empirical argument that induction must be capable of emerging deductive and formal systems.

  2. Decoding to a less restricted but less consistent informal system and then re-encoding to formal can identify new consistency.

  3. Formal systems can be used to induce coherence in informal systems, accelerating the search for new formal coherence.

  4. Both logic and metalanguage can naturally emerge by generalizing logical dependence and stripping away semantics.

  5. If a self-model is exposed, the metalanguage capability implies self-programming capability.


  • Transformer animation visualizing layered attention

  • In and Out of Love with Math | 3b1b podcast #3

  • Curry-Howard Correspondence

  • Kolmogorov Complexity

  • DeepSeek-Prover Automated Theorem Proving + LLMs

  • Automated Reasoning

  • Total Functional Programming

  • Stacked Restricted Boltzmann Machine

  • Fixed Point Combinator

  • Formal System

  • Syllogism

  • Incompleteness Theorem

  • Undefinability Theorem

  • Metalanguage

00:00 Intro: reasoning about, and implementing, AI’s disruptive advances

  • AI’s advances will continue.
  • The three chapters on reasoning and implementation lay out why.
  • Precise, systematic reasoning is the core of AI.
This video came about because I needed an answer to the question: are we going to continue to see disruptive advances in artificial intelligence? The conclusion was yes, and the first three chapters present a deductive argument for why. If you are already familiar with formal systems and deductive reasoning, skip to the chapter about inducing deduction. The next few chapters sketch out how the implementation would actually work, what kinds of things are impossible, what work remains to be done, and what things are not problems at all. Finally, we're going to talk about how this fits into industry and why this is such a huge moment for open source. Okay, the first requirement we should expect of a precision AGI is that whenever a question has a clear, definite answer, it should never get that answer wrong. It shouldn't depend on a random seed, it shouldn't depend on what other relationships are superimposed on some training output. So let's start by differentiating reasoning that gets things right most of the time from reasoning that gets things right every time. This is the distinction between inductive and deductive reasoning. Inductive reasoning is pattern matching.


00:34 Deduction

00:59 Speaker 1

I see the event, I expect the event. It gets things more right whenever there is a strong correlation. Because it’s not concerned with how things tend to be true or why they tend to be true, inductive reasoning is very versatile, but it is vulnerable to things like correlation versus causation errors. In contrast, deductive reasoning is structured. It’s very concerned with how we arrive at a particular conclusion. And this disciplined structure allows us to be absolutely certain of the conclusions. For an example of this structure, let’s take a look at syllogisms, a kind of logic known since ancient Greek times. All men are mortal. Socrates is a man. Therefore, Socrates is mortal. Once I tell you the first two facts, you can deduce the third with zero uncertainty. While there are many kinds of syllogism, if we just mix and match the parts, not all of them are valid. For example: some cats are black things. Some black things are TVs. Therefore, some cats are TVs. Because the conclusion is false even though the premises are true, this argument is invalid.

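Note to self: that invalidity can be checked mechanically, because an argument form is invalid exactly when some model makes the premises true and the conclusion false. A throwaway sketch of my own, with made-up extensions for the terms:

```emacs-lisp
(require 'cl-lib)

(defun overlap-p (a b)
  "Non-nil when sets A and B share a member (\"some A are B\")."
  (cl-intersection a b))

(defun counterexample-p (cats blacks tvs)
  "Premises true but conclusion false => the argument form is invalid."
  (and (overlap-p cats blacks)       ; some cats are black things
       (overlap-p blacks tvs)        ; some black things are TVs
       (not (overlap-p cats tvs))))  ; ...yet no cat is a TV

;; One countermodel suffices to show the form is invalid:
(counterexample-p '(felix) '(felix bravia) '(bravia)) ;; => t
```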

01:57 Speaker 1

If an argument can transform true statements into a false conclusion, then it is an invalid argument. But if you have a valid argument and the premises are true, then the conclusion follows, and we call this sound. Whenever I’ve reached a conclusion, I can then reuse that conclusion in another argument. Because everything is truth-preserving, that becomes a new premise. And if I use another valid argument in composition, then we’re going to continue to get more and more truth. Even without any additional empirical facts, I can continue reasoning my way to more and more conclusions, gaining vastly more mileage for my knowledge. Okay, so we want to put our deduction into a computer. The first thing we’re going to need is a grammar so that we can have a consistent language. As you can see, grammars themselves are quite precisely defined and they enable us to understand and to generate sentences in the target language. Whenever we carry out reasoning through transforming these precise expressions, it is known as symbolic reasoning and it has a lot of desirable properties. Take the humble expression y = mx + b.

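The compactness point is easy to make concrete: the symbolic form of y = mx + b is a handful of cons cells, loses no information, and extrapolates exactly to any input, which a trained approximation cannot promise. A minimal sketch of my own:

```emacs-lisp
(defun sym-eval (expr env)
  "Evaluate symbolic EXPR using variable bindings in the alist ENV."
  (pcase expr
    ((pred numberp) expr)
    ((pred symbolp) (alist-get expr env))
    (`(,op . ,args)
     (apply op (mapcar (lambda (a) (sym-eval a env)) args)))))

;; The same tiny expression is exact at any scale:
(sym-eval '(+ (* m x) b) '((m . 2) (x . 5) (b . 3)))       ;; => 13
(sym-eval '(+ (* m x) b) '((m . 2) (x . 1000000) (b . 3))) ;; => 2000003
```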

02:30 Formal Systems

02:58 Speaker 1

If I try to express this in a neural network, it may require a lot of space and I won’t be able to do things like extrapolate to infinity. But when we use symbolic reasoning to transform these compact expressions themselves, we save a lot of space and computation. There’s no information loss, and we don’t need any empirical data in order to come up with or store the transformation. To talk about symbolic reasoning, here’s a quick refresh on formal systems. We start off with a grammar that allows us to parse or generate the syntactically valid sentences. These sentences are known as well-formed formulas. These are the expressions that can have meaning in our formal language. We transform these sentences into other sentences using inference rules. They have two properties: they always make another sentence in the language, and they preserve truth. So if the sentence is a true statement, then it’s going to be a true statement after the transformation. This process of using inference rules to go from A to B is called derivation. The formal analog to a conclusion, the sentence that we arrive at, is known as a theorem.
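Derivation as described here is mechanical: keep applying truth-preserving rewrites until the goal sentence appears. A toy sketch of my own with modus ponens as the only inference rule, closing a fact set until nothing new can be derived:

```emacs-lisp
(defun derive (facts rules)
  "Close FACTS under modus ponens.
RULES is a list of (PREMISE . CONCLUSION) pairs of symbols."
  (let ((changed t))
    (while changed
      (setq changed nil)
      (dolist (rule rules)
        (when (and (member (car rule) facts)
                   (not (member (cdr rule) facts)))
          (push (cdr rule) facts)   ; each conclusion becomes a new premise
          (setq changed t)))))
  facts)

;; The classic example; the derived sentence is a theorem.
(derive '(socrates-is-a-man)
        '((socrates-is-a-man . socrates-is-mortal)))
;; => (socrates-is-mortal socrates-is-a-man)
```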

03:55 Speaker 1

The initial statement, the transformations, and the sentence we arrive at, all together, are known as a proof. Within a given formal system, there may be many ways to derive a particular theorem. And the idea is that we want all of these paths to arrive at the same conclusion. We never want to derive a contradiction. And this property is known as consistency. Consistency is the property that we are after. It’s what allows us to know that if we’ve done everything right, then the conclusion that we get to is significant in the system that we are studying. In order to interpret the theorems we prove, we need to use a model to map from the symbolic expressions back to some semantic meaning. If the system that we’re studying is concrete like the motion of the moon around the earth, then the theorems we derive should lead to interpretations that are consistent with what we observe. We are interested in predictive power. If those observations don’t match, then while the formal system may be consistent, the symbolic expressions mean things that just contradict reality and we don’t have a model with predictive power.

04:53 Speaker 1

However, if the system we are studying is purely abstract, like math, computer science, and logic, then there is no concrete reality with which to diverge, and as long as everything is consistent, then the predictive power of the model is basically inherent. A second property that is sometimes but not always sought after is called completeness. It’s the idea that for every true statement in the language, every theorem, I can derive that theorem using nothing but the inference rules and a common set of axioms. We can think of completeness as being able to reach every single true statement using inference rules branching out from the axioms. The axioms are a set of well-formed formulas that sit at the foundation of the system. They are selected so that they are unique, so that none of them are redundant, and because they are so simple, so self-evident, that they are inscrutable. We use completeness to provide a different set of guarantees, to answer a different set of questions. Have we covered all our bases? Can something exist? Or can it not exist?

05:58 Inducing Deduction

05:54 Speaker 1

But if I’m only interested in a certain range of predictive power, then completeness doesn’t give me anything extra. Alright, so now we have all the pieces necessary to encode perfectly consistent logic into our computer. Now let’s find out why that’s not going to go so well, why it turns out that we need imprecise systems to build precise ones, and why that means LLMs can enable us to lean over the tipping point. The first thing we glossed over was: how do we get the grammar? How do we go from having naturally organized language to suddenly having precise syntax? But let’s just skip this for now and go get our grammar from other languages. The second problem is, when we’re deriving a theorem and we have multiple inference rules, how do we decide which inference rule is going to take us towards the goal? How do we even know where we’re trying to go? It turns out that unless the system is decidable, like a finite state machine or propositional logic, we’re relying on some heuristics or breadth-first search or a person sitting in a chair. But our goal is to automate everything, including the use of precise formal systems to identify new formal systems.


06:52 Speaker 1

So we’ll just assume that breadth-first search and heuristics are good enough. So it turns out that Kurt Gödel did a set of proofs that showed that for any consistent formal system of sufficient complexity, there are statements that are true in the language that cannot be proven from the axioms. And the consequence is that if I’m using a formal system to identify new formal systems, then there are going to be things that it should be able to decide that it just can’t. And the only way to acquire the ability to prove these other truths is to add more axioms to my system. So the question is, where do the axioms come from? A little bit more intuitively, Tarski’s undefinability theorem says that if we’re going to work on an object language, if we’re going to reason about what’s true in the object language, then we need to use a metalanguage, and that there are ideas and concepts in the metalanguage that don’t exist in the object language. So the question becomes, where does the metalanguage come from? If I need a metalanguage to reason about that language, where does that metalanguage come from?

07:52 Speaker 1

And so we’ve fully constructed the rabbit hole of trying to use formal systems to identify and develop new formal systems. Now maybe we’re just not very good at this and there actually is a way to bootstrap out of the rabbit hole some sort of fixed point calculation that’s self-referential yet consistent. Now I’m not aware that this work or any other work disproves the potential of such a thing, but I’m going to argue that it’s not necessary. Instead of answering this question, we can use an empirical argument, one based on evidence. Because early hominids did not have formal systems, and yet by the time of ancient Greece, philosophers like Euclid were able to basically formalize things like elementary geometry. So it must follow that either humanity has some innate capacity to formalize that just magically turned on one day, or inductive reasoning and natural language can achieve formalization. Regardless of what you believe about the first possibility, I think we can argue quite strongly that the second possibility must be true. We can look at deduction in the use of formal systems like being on a train that’s on a set of tracks.

08:52 Speaker 1

We can find all of the truths that are on that set of tracks. But if there’s another truth that’s not on our set of tracks, there’s no way for us to get there. We need to relax our restrictions a little bit to become a little bit more inductive, a little bit more natural. And with that lack of precision, because we can no longer follow the rules, even if we tried, we will sometimes find ourselves on another set of tracks, and then it becomes possible to find the other truths. It’s like the expression that a broken clock is only right twice a day, except in this case when we intersect with the truth, we’re going to identify consistency. Suddenly our models will have predictive power. It is to say that the lack of precision of natural language and the uncertainty of inductive reasoning can follow an accurate enough vector of suspicion to arrive at a new formal system, to induce deduction. Because this conclusion is so important, I want to corroborate it with a little bit more natural language. Here’s a quote from Steven Strogatz in the In and Out of Love with Math podcast. You first figure out what’s true.

09:51 Speaker 1

And then when you need to make it rigorous, that’s a second step. But just into like, what do you want to be true? That should be treated as a step worth highlighting and holding up as just as valuable as the rigorous follow-up. I’ve linked the podcast below, but what it’s describing is a phrase called morally true, where a mathematician suspects that something must be true, but hasn’t yet done the rigorous proof. This sounds a whole lot like some inductive reasoning starting to resonate with something that can be extracted and made rigorous as a deductive proof. So what I’m saying is that if we use a computer to refine inductive to deductive, that’s the same process that humans use. The major takeaway should be that we started off by demanding a precision AGI, one that could reason deductively and could formalize anything. But we concluded that in order to achieve that precision for any task, we require an imprecise system, one that uses inductive reasoning and natural language.

10:44 Speaker 1

This is why I believe the current crop of transformers’ capability to generate somewhat precise sentences and to transform them somewhat consistently is good enough to enable a breakthrough in AGI. The independent advances in image and video generation are neat, but I believe it is the sentence generation capabilities of LLMs that are pushing things beyond a tipping point. So instead of just saying that it’s here, let’s move to a constructivist approach and actually try to sketch something out. To establish our goal, we’ll start with some requirements, the first of which is to be able to formalize anything. As we just argued, this is going to require both natural and formal, both inductive and deductive capabilities. The second requirement is we want to have true chain of thought reasoning, and we’ll show that this becomes equivalent to having universal computation. Our third requirement is that we want to learn from the things that we formalize. That means when we create a new theorem, we don’t just want to put that in the output, we want it to be reflected in our future inductive, deductive, and formal reasoning.

11:04 Spectral Reasoning

11:39 Speaker 1

We will show that introspection is required for this capability, and that self-programming almost trivially emerges once it is achieved. To satisfy this first requirement of being able to formalize anything, we’re going to require the entire range of different kinds of reasoning. We need inductive, deductive, and symbolic reasoning. Because we’re going to cover the entire spectrum of different reasoning techniques, we might as well call this capability spectral reasoning. Every spectrum is defined on some dimension, such as how structured the language is. Inductive reasoning is carried out in a completely natural language. Symbolic reasoning is carried out in a language defined by a completely formal grammar. Deductive reasoning, defined somewhere in the middle, takes place in a semi-structured natural language. There is a correspondence between the level of structure and the level of semantic meaning. Whenever we’re using inductive reasoning in natural language, we’re using sentences that describe the actual things that we saw. They have full semantic meaning. But whenever we’re working with a symbolic language, the symbols don’t mean anything on their own. They are fully syntactic.

12:38 Speaker 1

Think about these correspondences for a moment. As we move from natural to formal, we also move from semantic to symbolic. We move from inductive reasoning to deductive reasoning, and finally to formal reasoning. We move from the empirically measured to the abstract. In all cases, as we move from the natural to the formal, we gain more predictive power. We become more consistent. When we transform the sentences, we get other true sentences. We can learn things and calculate things with less and less empirical experience behind them. When this implied knowledge aligns with observations, through consistency we gain more and more predictive power. And as the models themselves begin to show intersections where we have corroborated the ideas in one model with another model, we begin to gain rigor. We have not only formalized our knowledge but we have integrated it into a greater mass of intersecting formal systems that are consistent where they fit together, giving us greater and greater certainty of their predictive power. One way to use spectral reasoning is to emerge new symbolic representations from scratch.

13:38 Speaker 1

All knowledge begins as empirical observations. We merely see events. As we begin to see patterns of coincidence, we can use inductive reasoning to associate one event with the other. As it becomes clear that there’s dependency among these patterns, where there’s a one-way increase in the coincidence, we can assign these dependencies logical meaning. We have entered the deductive. And as the consistency of logic emerges, we can finally strip away the semantic meaning entirely and reason in a purely symbolic form. However, the real potential for spectral reasoning is that when we run into a dead end in the formal system, we can temporarily downgrade or decode our formal sentences to more natural sentences. Then if we carry out some chain of thought deductive reasoning with fewer restrictions, we can re-encode it as formal reasoning. If the new formal system works, we hold on to it. If it doesn’t work, or if it’s inconsistent, or its predictions contradict our observation, we can again decode that to more natural language and then use these findings to try again.

14:37 Speaker 1

To implement spectral reasoning, we would want to use a variety of programs. Whenever we’re working with symbolic expressions, we would use automated theorem provers. Whenever we’re working with natural language and deductive reasoning, we would use something more like a fine-tuned LLM. And for working with problems of inductive reasoning, we could actually get away with even simpler neural networks and other heuristics. In order to bridge the gap between symbolic and natural language, we would mainly rely on LLMs that had been fine-tuned specifically to encode and decode between symbolic expressions and statements with a little more semantic meaning, the kind of statements we can use with deductive reasoning. To train and fine-tune these networks, we will rely on large corpuses of known problems that we can run automated theorem provers on and large corpuses like Wikipedia that already contain lots of statements about formal systems. This kind of work is already going on, by the way; there’s nothing hypothetical here.
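Sketching that "variety of programs" for myself: it amounts to a dispatch table keyed by how formal the task's language is. All three backend names below are hypothetical placeholders, not real packages or APIs:

```emacs-lisp
(defvar my/spectral-reasoners
  '((symbolic  . my/automated-theorem-prover)  ; hypothetical backend
    (deductive . my/fine-tuned-llm)            ; hypothetical backend
    (inductive . my/pattern-heuristic))        ; hypothetical backend
  "Map from language register to the procedure that handles it.")

(defun my/spectral-dispatch (register task)
  "Route TASK to the reasoner registered for REGISTER."
  (let ((reasoner (alist-get register my/spectral-reasoners)))
    (unless (fboundp reasoner)
      (error "No reasoner implemented for %s" register))
    (funcall reasoner task)))
```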

15:38 Recursive Computation

15:32 Speaker 1

Now that I’ve already brought up a use case for chain of thought reasoning, let’s take a look at what this requires and how we would implement it. I think it’s helpful to start off by looking at a correspondence between theorem proving and computation. For a very formal treatment of this subject, you can look up the Curry-Howard correspondence. But for a more natural treatment of this correspondence, let’s first remember the truth preservation and composition of deductive arguments. If I have the conclusion of a deductive argument, I can treat that as the premise for further deductive arguments. And this truth preservation through composition just continues to scale out, reaching more and more true conclusions. The structure of composition in truth preservation is mirrored in the composition of pure functions. If I have a pure function that is correct over its entire range of inputs, I can reuse the output in another pure function and I am guaranteed that the ensemble is also correct. The only requirement is that the type signatures and the range of inputs match the capabilities of the functions.
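The function-composition mirror of truth preservation is easy to demo: if each pure step is correct over its inputs and the types line up, the ensemble is correct by construction. A small sketch of my own:

```emacs-lisp
(require 'cl-lib)

(defun compose (&rest fns)
  "Return the composition of FNS, applied right to left."
  (lambda (x) (cl-reduce #'funcall fns :from-end t :initial-value x)))

;; Each step is correct over its whole input range, so the ensemble is too:
(funcall (compose #'number-to-string #'1+ (lambda (n) (* n n))) 6)
;; => "37"  (6 squared, plus one, rendered as a string)
```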


16:30 Speaker 1

This is an incredibly powerful tool in building larger pieces of software. But the key point is that the structures are identical and that being able to carry out any form of reasoning is identical to being able to perform any computation. Reasoning and computation are not similar capabilities, they are identical capabilities. Now let’s look at the presence of pseudo-computation in the topology and behavior of existing neural networks. I’m going to start with an example of a stacked restricted Boltzmann machine. This is an older type of network that was the precursor to the kind of deep belief networks that became popular during the deep learning era. Whenever we do an inference in a stacked RBM, we take the visible layer and we bring it into thermal equilibrium with a hidden layer. The two layers are exchanging information. This enables us to do a kind of pseudo-recursion. Whenever thermal equilibrium is almost reached, a heuristic is triggered, halting the current layer and moving on to the next pair of layers. However, if we look at the inference as a whole, we’ll see that we are still stepping forward through pairs of layers.

17:27 Speaker 1

We are performing a little bit of a calculation with a little bit of pseudo recursion in between, but the whole process is still basically a feed forward process. The significance of this is that if we have a process which requires five steps and there’s only three pairs of layers, then it’s difficult for the network to actually represent the true nature of that computation. Now let’s look for evidence of computation and recursion in transformers. Whenever transformer models like ChatGPT are iteratively inferring the next token in the output, they can attend to, they can pay attention to, the previous outputs. This enables them to perform a kind of recursion where they’re reusing a previous result. This is one form of both recursion and computation because we can carry out multiple steps. Another step in the inherent computation is whenever the attention mechanism masks certain inputs. This is essentially one logical step in a program. And finally, because there are multiple layers, for which I am linking an animation below, each layer is again a logical step in a computation.

18:27 Speaker 1

The behavior that is most obviously missing in this kind of iterative transformer model is that we cannot write to any of the previous outputs. Because of this, there is a limit in the depth of computation that can show up in any output. This is why ChatGPT needs to pad a lot of its output in order to basically use the early output as a form of working memory, enabling it to produce outputs that require more steps than are inherent in the architecture and enabling it to reuse outputs in ways that are not connected in the architecture directly. Before we move on, I just want to point out that stacked restricted Boltzmann machines are not cool in 2024 and transformers will not be cool in 2034. They are just not good enough. Instead of looking at more things that embody pseudo-computation, let’s go ahead and look at a real computer. If we turn to theory of computation, we have the well-studied universal Turing machine. It consists of a read-write head traversing an infinite tape. It has a state register and a control function that determines its next move based on the current state of the state register and the symbol that it’s looking at.

19:24 Speaker 1

If we de-abstract this, we’ll get more familiar concepts like working memory, and the capability to do recursion or iteration, going back and forth between the same locations on the tape. If we turn back to our diagram of spectral reasoning, we have a bag of procedures. These are analogous to strings of symbols that are written on the tape, pieces of programs that we can run. We cannot give it an infinite tape, but we should give it some amount of memory where we can write our inputs to, where it can write its intermediate outputs to, and where we can read the final outputs from. In order to call some of our bag of procedures in response to new inputs, we’re going to need a control function. In its most basic implementation, the control function will call an entry point procedure which is going to decide whether or not we’re going to run further procedures or if we’re going to halt. The result of this procedure is recorded in the working memory, and then the control function just follows the plan. After each cycle of writing outputs, the control function will again call a procedure that’s going to decide: should we keep going and what should we do if so?

20:19 Speaker 1

Because there’s no way to decide if this entry point procedure will ever decide to halt, we do also want to be able to halt externally in response to a signal from the controlling process. The implementation of this entry point procedure should look a lot like a fixed point calculation. We just keep calling the same function with access to all of the intermediate outputs until the result doesn’t change. We have reached the stable fixed point. Using this basic iterative behavior, we can implement things like checking our work. For every single pass, I decide what I think the output should look like, and then what procedure I should run to try to achieve that output. If the results are coherent with the form that I’m expecting to return, and the entire thing is coherent with the prompt from the user, then I go ahead and return the result. If I can’t reach that state, or I reach another stable fixed point, then I should do something like return an error or ask a question. So there you have it.
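The entry-point procedure described above is essentially iterate-until-stable. A minimal sketch of my own, assuming only a step function over working memory, with an iteration budget standing in for the external halt signal:

```emacs-lisp
(defun run-to-fixed-point (step memory &optional max-cycles)
  "Call STEP on MEMORY until the result stops changing.
Signals an error when no fixed point is reached within MAX-CYCLES."
  (let ((budget (or max-cycles 100)))
    (catch 'stable
      (dotimes (_ budget)
        (let ((next (funcall step memory)))
          (when (equal next memory)
            (throw 'stable memory))  ; stable fixed point: halt
          (setq memory next)))
      (error "No fixed point within %d cycles" budget))))

;; Toy step: elaborate a plan until there is nothing left to add.
(run-to-fixed-point
 (lambda (plan) (if (< (length plan) 3) (cons 'subgoal plan) plan))
 nil)
;; => (subgoal subgoal subgoal)
```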

21:10 Speaker 1

With a basic recursive structure, we can implement composition by combining the results of previous outputs, we can do flow control by deciding which procedures we want to run, and we can check our work by iterating over our outputs before we emit the final result. For our last requirement, we wanted to have online learning. That means I don’t just want to derive a new theorem, I want that theorem to be reflected in all of my logic, all of my spectral reasoning. Because we have embedded neural networks, there will obviously be some training, but the implementation of that is nothing new, so we’re not going to go over that. The contents of that training can be a little bit more interesting, because whenever we downgrade from formal, whenever we decode our formal expressions into natural language, and then we do some chain of thought reasoning using our deductive networks, it should correspond to the proofs that can be done in the formal system. We should be able to imprint, to achieve consistency in the informal systems, using the synthetic data that we can generate from the formal systems. And the gain should go both ways.

22:07 Speaker 1

We should be increasing consistency overall, and if we want to formalize something else, then we should be able to formalize it faster because we have enhanced our deductive systems to make them incorporate the learnings from the full formalizations. When we find these and when we find predictive power in formal models, those changes should be used as synthetic data to retrain ourselves so that we could reformalize the same things faster. Whenever that generalizes, we will formalize new things faster. Now if we stopped here, we would be talking about online learning using a fixed set of tools. We wouldn’t really have any guarantees of whether knowledge had been obtained formally or informally, and if it was, what kind of process was involved. In order to answer this question, we would have to be able to do introspection, not only looking at the plans that we had written into the working memory, but also reading the procedures that we were using. And this brings us into the funny topic of metalanguage, because we need to use reasoning to look at our own reasoning.

22:32 Online Learning

23:05 Speaker 1

Now, metalanguages can be either formal or informal. If I’m using formal verification of a computer program, that’s a formal metalanguage that analyzes the program to verify its correctness. Almost this entire video is using natural metalanguage to talk about various formal and informal languages. In the very same way that we expect deductive reasoning to emerge from finding the dependency in correlations, we can expect metalanguage to emerge by finding the dependent correlations in language. I shouldn’t need to point out that all of the neural networks, all of the procedures, all the programs in spectral reasoning are expressed in programming languages. And if we want to reason about how to construct these programs or how they are constructed, we would just be using metalogic on programs. And if we carry out symbolic or deductive chain of thought reasoning on statements about the programs, we should be able to re-encode those as new programs.

23:58 Speaker 1

One of the limitations is just that we would need to have some exposed information about the runtime so that our program would be able to generate expressions that we could go ahead and evaluate, things that could be incorporated as new procedures within our bag of spectral reasoning. And at that point, we would expect to be able to feed in all of the information that we already have about machine learning, all of our formal systems, all of our programs, and have the program itself be able to convert that knowledge into decisions about modifying the procedures that are found in its spectral reasoning, enabling us to do things like learn faster and learn more effectively. We’ve already talked a little bit about baking universal computation into the architecture. Once we’re introspecting, this becomes valuable because a Turing machine can simulate any other Turing machine. And this means that with the addition of a model to expose some features of the runtime, our AGI would be capable of inferring how to write programs by looking at the behavior of other programs.

24:56 Speaker 1

While an AGI can’t learn about physics without us talking to the machine, an AGI can learn everything about math, reason, logic, and computer science because all of these are completely abstract fields. Every single bit of the empirical information can be found by just running programs. Universal computation offers a complete window into all of the synthetic data we would ever need to learn any relationship that we would ever want to know about these topics. In the same way that humans can learn to run and jump in the real world, a computer can learn how to program a computer. We’ll get to that in a bit, but first let’s look at some limitations that are not overcome just by changing the architecture. First of all, we’re not moving the boundaries or creating any new complexity classes. We still can’t solve BQP-hard problems quickly without a quantum computer, and NP-hard problems still take a lot of time. Another limitation that isn’t going anywhere just because we’re using symbolic reasoning is the minimum encoding length.

25:34 Limitations

25:53 Speaker 1

Just because we can pack a lot of information into a small area doesn’t mean we can pack an infinite amount of information into a finite area. The lower limit is called Kolmogorov complexity. I’m going to link it below. The relevance here is that while we can do more with less, don’t expect any kind of breakthrough in compression like storing an entire dictionary in 30 bytes of data. Rather, the improvement we should expect is relative to an LLM, where during inference, the data that is relevant to that particular inference is going to be very sparsely distributed. So most of the operations that actually get done are just multiplying zeros and other very small numbers. And finally, undecidable problems like the halting problem are not getting any easier. While our AGI can generate any program, it can’t tell us if an arbitrary program is going to halt on a given input. It might do something cool like generate some highly accurate heuristics, or use a language that can only express programs that halt. But generally solving the halting problem for all languages will remain impossible.

26:52 Speaker 1

There are lots of undecidable problems. Many of the behaviors we could imagine being implemented in control logic are at best implemented via approximate heuristics. Another extremely important source of undecidable problems, and one that’s going to continue creating a lot of work for us, is deciding anything that requires empirical knowledge of the universe. We can decide if a mathematical system is consistent, but that’s not the same as deciding if that mathematical system actually represents physical reality. We might have something that’s plausible, that’s consistent, but it has zero predictive power. While theoretical work can and often does point out where we want to run our next experiments, we need the results of those experiments to decide which consistent explanation is the correct one. So what kinds of work remain to be done? The first is that we have a lot of these components that were actually designed to be used by a human, such as interactive theorem provers. And the reason is because, until now, a lot of the work actually could not be done in a fully automated way.

27:36 Remaining Work

27:50 Speaker 1

The problems themselves could not be stated automatically. And if we want to use these in a context where everything is done automatically, then we are going to have to provide more machine appropriate interfaces and redesign some of these softwares to be used in a fully automatic way. The second thing is that we’re going to have to do a lot of work on a runtime, the design of which is fairly new. The job of the runtime is to orchestrate calling into our bag of procedures in our spectral reasoning. It needs to manage the memory, it needs to call into the entry point procedures, it needs to manage input and output, such as retrieving papers off of arXiv, and it needs to expose a model of its contents and its behavior so that introspection can occur. One of the more tricky behaviors to maintain is going to be converging towards coherence. We want the runtime to be modular and internally open so that the implementation can continue to be refined from within. But if this becomes divergent, we’re going to need some fallback capabilities to get back on track.

28:48 Speaker 1

There will be a lot of cases where the only consistent signal other than user feedback is going to be an increase in coherence, an increase in the predictive power. We would like to encode a rule that says that observations from the real world win, but how do we decide what those observations are? Once we have a runtime, we need to populate it with sufficient capabilities. We need useful implementations of spectral reasoning. We need to be able to convert formal and natural sentences. We need reasonable control logic implementations. A lot of the initialization work would be what we consider foundation. Regardless of what capabilities we want the program to learn, out of the box it’s going to have to be pretty good at logic in order to be able to learn those things. These kinds of foundation behaviors for abstract applications can be trained using nothing but synthetic data, but once we have them trained, we would like to just distribute them because they’re not unique from instance to instance.

29:43 Speaker 1

Another area of likely work will be creating the kinds of networks that we need to implement spectral reasoning. While LLMs can be used to paper over holes and tape things together, they are pretty slow to train and the inference is pretty costly. It may turn out that when we start to decompose a lot of these capabilities, instead of having everything in one giant model, other models are simply more optimal. Because we are interested in models that implement very simple ideas like inductive reasoning and then looking for dependencies in reasoning, we will probably find different architectures work better in this kind of composition. Changes in topologies can also bring about changes in training methods. That’s going to be true here because we’re prioritizing online learning and the ability to operate in composition. When models are operating in composition, the idea is that we would probably mix and match different models, and so the lifetime of any given model is probably pretty short, which makes us want to prioritize the cost of that model, not the capability.

30:40 Speaker 1

Another area of outstanding work is how we are going to motivate such a program. We can try to give it some persistent instructions, and we can allow it to ask us questions. But how all of this should turn into behavior, especially the behavior that enables convergence to occur, is very much up in the air. But while things like control logic inherently include a lot of undecidable problems, we first need to get over them, and we first encounter them doing the most basic things like teaching a machine to do inductive reasoning. And once we do achieve a little bit of initial operating capability, the idea is to hand the control over to the machine to learn about what things we want to improve. So don’t work too hard on the thing when it’s supposed to work on itself. So, things in this picture that are not problems: precision we get through having deductive formalized systems. Universal computation we get through having the correct topology and architecture. Hardware scale.

31:21 Non-Problems

31:32 Speaker 1

This is something that looks like a problem because everybody’s using a giant machine and we’re hearing things like emergent capabilities that happen whenever we increase the parameters over a certain level. What’s really going on is that we found how to achieve a new capability and even though it’s expensive right now and you need a big machine, the value vastly outweighs the cost, and so people are throwing down lots of money without really worrying about the size. This is like the ENIAC phase of computers. We’re after the capability level, and we haven’t gotten to the part where we’re trying to make it small. But we need to remember, for any behavior, there is a lower bound on the size requirement. It’s called Kolmogorov complexity. Depending on the behavior we’re talking about, it’s a lot smaller than 10 billion parameters or 100 million dollars in training time. As mentioned earlier, the calculations done for symbolic reasoning are much more efficient and compact. They represent a lot more information.

32:23 Speaker 1

So when you add it all up, the likelihood that we’re going to keep multiplying lots and lots of zeros with these very sparse designs just to keep NVIDIA’s market cap over $3 trillion is very low. And finally, for people worried about data availability, the internet’s not going anywhere. You can still download every single paper off arXiv if you want, the data is always increasing in the quantity available and the costs are going to continue to come down. Furthermore, every time we have a more sophisticated model, we can do more with the same data. And when your model has a capacity to do both the theoretical understanding of the data as well as the empirical, then you can get a lot more out of the same data. So what this means is that having a giant data set or an exclusive data set are not going to add up to the kind of value that people are expecting. A small player will be able to generate a small amount of data, combine it with a little bit of theory, and then extrapolate it extremely far.

33:19 Speaker 1

Something that will play a big role in this capability to develop our theoretical capacity is synthetic training data. For any purely abstract system, you can generate infinite true data. An example of an abstract system is a Turing machine. I can simulate a Turing machine on a Turing machine. And this means that I can gather all the data that I need to develop all the awareness that I want about things like computer science or symbolic reasoning without knowing a single thing about the real world. There is a lot of software out there that is already being used in this way. We have automated proof assistants, people are using these kinds of tools and integrating them with LLMs to fine tune models that are good at proving and doing theoretical work. And when you really just need to bootstrap a system, you can make it talk to an LLM until it comes up with its own internal consistency. We don’t need a ton of data and that data is not copyrightable. Just do it.
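The "infinite true data" claim is mechanical to demonstrate: for a purely abstract system, running the system is the labeling oracle. A tiny sketch of my own that emits perfectly labeled (program . result) pairs:

```emacs-lisp
(defun synth-arithmetic-sample ()
  "Return one (EXPR . VALUE) training pair from a random tiny program.
Every label is true by construction; the evaluator is the oracle."
  (let* ((op   (nth (random 3) '(+ - *)))
         (a    (random 100))
         (b    (random 100))
         (expr (list op a b)))
    (cons expr (eval expr))))

;; An endless stream of ground-truth data, no scraping required:
(mapcar (lambda (_) (synth-arithmetic-sample)) (number-sequence 1 3))
;; e.g. => (((* 42 7) . 294) ((+ 9 61) . 70) ((- 13 88) . -75))
```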

34:14 Doing it Wrong

34:13 Speaker 1

Presuming that our model architectures and use cases do indeed evolve very rapidly, what are some ways that we could do things wrong? First up, we’re in a situation where a lot of the old hard problems are the new easy problems. This affords a lot of opportunity to go out there and hustle and be sure and get these new capabilities into people’s hands. In some cases just making them aware that the capability exists and putting some wrappers around it for convenience. While a lot of these efforts will be successful, at least at raising money, they are very deep on product and very shallow on tech. So if the tech continues to move quickly, then they will also have to continue to move quickly. And there are some cases where that won’t work, such as if your entire product becomes incidental capabilities of somebody else’s more advanced product. Some of this work will show us ideas and show us where the demand is, but a lot of it is just capturing the demand to invest things with the label AI. So let’s say we want to go deep on the tech and theory side and improve and study LLMs.


35:08 Speaker 1

The problem is that the LLM architecture is not the kind of stable core concept like turbofans in the 1970s where we’re going to continue to improve and optimize them for the next 50 years. Stacked restricted Boltzmann machines are not cool in 2024, and transformers will not be cool in 2034. Long before we have squeezed the last drop of water out of the LLM rock, another architecture is going to come along, and it will supersede all of the efforts to improve LLMs. When you can represent data symbolically and carry out symbolic reasoning, who cares about something like grokking and being sure that your approximation has generalized? But probably who is getting it the most wrong, in many ways because they have to, are our major technology companies. A lot of these companies already have massive core revenue streams, and so we can consider them relatively post-growth. What post-growth companies tend to do is invest in acquiring and applying technologies to their existing revenue streams. And they don’t step into new revenue areas that have some kind of logical disconnect with their existing core products.

36:08 Speaker 1

The theory is that they won’t have their big company advantages if they step too far away from their existing distribution and too far away from their existing experience and institutional knowledge. And so it just follows that Microsoft will be enhancing email for the enterprise. Apple will be trying to give you an excuse to buy a new phone. Google will be absolutely scrambling to protect search. For the most part, Meta seems happy to sit back, developing technologies to keep up with the trends, but then open sourcing them to stir the pot. This makes sense if you consider that their competitive advantage is traditionally in network effects, not in having superior artificial intelligence. By diffusing competitive advantages around AI, they’re able to hedge their bets on their core revenue stream, using brain cancer as a vehicle to sell advertisements. These companies will be leading investment, they will be leading acquisitions, but they will not be leading innovation. It is not why people go to work for these companies. It is not what the shareholders want from these companies. And it is not what these companies have been optimized to do.

37:06 Open Source: Part Deux

37:05 Speaker 1

This is a good opportunity to focus on why this is such a huge and critical moment for supporting open source. Our technology market cap leaders and our hustling startups are mainly focused on doing more of what we’re used to. And while there might be a trillion dollars up for grabs in IT products disrupting other IT products, you should be betting that the lion’s share of the revenue that’s going to materialize is going to appear in completely new innovation. We are already in an environment of revolutionary technologies for curing diseases, creating materials, doing integrated design, and AI is going to make all of these fields accelerate. We’re talking about rapidly developed designer molecules that make most diseases a thing of the past. We’re talking about ways to break down microplastics in our own bodies and ways to create biopolymers so we don’t have the problem in the future anyway. We’re talking about pushing the theoretical limits of solar cells and LEDs and battery chemistry and doing things way faster than we can get them done in a lab.


38:01 Speaker 1

And yes, we’re also talking about the kind of advanced control systems and reactor design necessary to make fusion power an imminent reality. And this is just listing some of the big things. If every single thing you buy gets better because doing integrated design is faster and cheaper and more effective, then everything you buy off Amazon is going to get better. It’s going to fit the use cases better, it comes to market better, it’s made out of better materials and breaks less. This kind of innovation drives money away from the parts of the economy that we don’t want to spend money on, things that we don’t want to exist at all, like type 1 diabetes and cancer. And it drives it into things that we want to enjoy with many fewer trade-offs, like infinite hamburgers with lab-grown meat. You should be thinking: that sounds great, but that’s not where we’re headed right now. The competitive environment that exists shares a lot of similarities to some things we saw in the 90s. The pressure to make development economically viable, to go after more certain revenue that you can cash, leads to a lot of self-defeating behaviors and stagnation.

38:59 Speaker 1

People are thinking about things like controlling proprietary data sets or having some little proprietary advantage in their LLM. But compared to the total mass of innovation, all of these smaller scale efforts to create some incremental value and gain some traction in the marketplace are going to lead to a situation like the Unix wars, where we had 20 different competing X servers and not one of the closed source options actually won in the end. At the other end of the spectrum, the players that do control tons of revenue will be trying to drive customers back into their own products using their competitive leverage, even while the value creation completely moves out from under the things they’re telling their customers to buy. But in some ways it’s actually a lot worse than the situation we had in the 90s, because now the internet is mature and we have things like user generated data monopolies, we have network effects where the thermal equilibrium favors having everybody on one platform. We have cloud software where you don’t own any of your own data and you can’t run any of your own software on premises.

39:58 Speaker 1

And best of all for the Bay Area, it’s all $13.99 for all 50 services you need. And for things that actually make money, like staying relevant as a developer, it’s $1399. So we have new problems and old problems, and these are not the same old companies either. Google is not 1998 Microsoft, and Microsoft is not 1998 Microsoft. They have competition, and they’re not subject to a giant DOJ anti-trust lawsuit yet. And this all means that while open source could break the logjam and enable the technology to flow downstream to where all this value creation opportunity exists, it’s going to require more sophisticated methods. Telling people not to use JavaScript didn’t work for the last 20 years, and it won’t work for the next 10 years. In particular, we cannot continue ignoring the consumer, people who can’t make non-financial contributions to the development of open source, people who can only pay money for the things that they use.

40:54 Speaker 1

And we have to find ways to wrap the ecosystem support bandwidth around these highly successful softwares, even when they are consumer softwares, where we have lots of people who cannot make non-financial contributions. This is the kind of stuff that Positron is building and why we’re out here gaining an audience, building a message, and telling people, yeah, go become one of our GitHub sponsors. Now the way we’re looking at creating a mature solution is by starting off in a niche where we can make contributions directly. That niche is Emacs and it makes a lot of sense because we are users who demand our software to adapt. We want to have control over every aspect of everything. We want it all to be introspectable, to have a consultative relationship with our software. We don’t want any of it to be handed down as a set of weights that we can’t edit or modify to fit our use case. And it’s all built around a very well integrated programming interface with very simple user interfaces that can adapt and modify and wrap around new use cases very quickly.

41:54 Speaker 1

There are a lot of things that could be accomplished with already existing off-the-shelf technologies, like using a vector database and Word2Vec to enable semantic search, or using pre-baked LLMs for translation. We can fine-tune LLMs for things like detecting bad code in a configuration, or completing arguments, things where we usually have to look in the manual to look through a bunch of plist options and recursively defined argument types. But what we really want is something that’s like M-x Jarvis. We want something that lives inside Emacs and watches us work on problems and can also see the tools we’re using to work on the problems. So it’s learning how to automate those tools in the context of the problems we’re trying to solve, and it’s even learning how to solve those problems directly itself. So we’re integrating automation into that high-gain feedback loop where we get more effective at the problems we’re trying to work on. And for real, with things like automated theorem proving built into the foundation models, we’re expecting something that can actually help us out with real science.
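Of the off-the-shelf ideas above, semantic search is the most self-contained: embed the query and the candidates, then rank by cosine similarity. A sketch of my own; the literal vectors stand in for whatever a Word2Vec-style embedding model would produce:

```emacs-lisp
(require 'cl-lib)

(defun cosine-similarity (u v)
  "Cosine of the angle between equal-length vectors U and V."
  (let ((dot  (cl-reduce #'+ (cl-mapcar #'* u v)))
        (norm (lambda (w)
                (sqrt (cl-reduce #'+ (mapcar (lambda (x) (* x x)) w))))))
    (/ dot (* (funcall norm u) (funcall norm v)))))

(defun semantic-rank (query-vec candidates)
  "Rank CANDIDATES, an alist of (DOC . EMBEDDING), against QUERY-VEC."
  (sort (copy-sequence candidates)
        (lambda (a b) (> (cosine-similarity query-vec (cdr a))
                         (cosine-similarity query-vec (cdr b))))))

(semantic-rank '(1.0 0.0)
               '(("set a key binding" . (0.9 0.1))
                 ("change the theme"  . (0.2 0.8))))
;; => ranks "set a key binding" first
```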

42:52 Speaker 1

Something that can actually write code that’s pretty good. Something that understands bugs and security issues because it understands strong consistency. And if we can do that, we can propagate this kind of consultative self-improving technology to every other problem. If you want to do this kind of work and you have relevant experience, go ahead and shoot me an email. My closing argument is just do it. Share this with like-minded people and go become one of our GitHub sponsors. Whoa. Not bad, partner. Follow me. I’ll cover you.

Related-Notes

References

Positron’s Emacs Channel, ed. 2023. Living Programmably. Directed by Positron’s Emacs Channel. https://www.youtube.com/watch?v=vhotxmO3NkQ.
———, ed. 2024. Symbolic AGI: How the Natural Will Build the Formal. Directed by Positron’s Emacs Channel. https://www.youtube.com/watch?v=mQEpPNu_pW8.
NO_ITEM_DATA:openinnovationusing24a
Last modified