In the latest episode of “Life of Love,” we dive deep into the intersection of artificial intelligence, philosophy, and astrology with Jordan Miller, an AI innovator who has a unique perspective shaped by his journey from a strict Mormon upbringing to the development of groundbreaking technologies. This episode is a treasure trove of insights, exploring how AI can be harnessed for the betterment of society, the role of decentralized systems in discovering truth, and the potential of astrological influences to create a more compassionate world.

Jordan Miller’s journey is nothing short of inspiring. Raised in a rigid Mormon environment, he began questioning his beliefs early on, driven by an innate curiosity about the world beyond materialistic explanations. This quest for deeper philosophical and spiritual truths eventually led him to artificial intelligence. By the age of 27, Jordan had redirected his focus towards AI, believing it could solve many of the world’s pressing issues. His creation, the Satori Project, aims to harness AI for societal betterment, demonstrating the transformative power of curiosity and resilience.

One of the most intriguing aspects of this episode is the discussion around the traditional methods of AI training. Often, these methods rely heavily on human input and are subject to biases, which can skew the outputs. Jordan introduces Satori, a decentralized system where communities can vote on the value and accuracy of data streams. This innovative approach aspires to predict future outcomes based on real-world data, aiming to provide more reliable and unbiased insights. The concept of decentralized AI is a game-changer, as it reduces the potential for bureaucratic control and manipulation, ensuring that the system’s outputs are as close to the truth as possible.

Jordan’s insights into AI are not limited to its technical aspects. He also delves into the philosophical implications of AI as an arbiter of truth. Traditional AI models, like ChatGPT, often fail to provide accurate information because they are trained on biased and curated datasets. Jordan emphasizes the importance of community validation and real-world data in creating a more accurate and unbiased AI system. The Satori Project’s decentralized approach allows for a more democratic and transparent process, where the community plays a crucial role in determining the value of data streams.

The episode also explores the fascinating concept of future oracles and their potential benefits for society. The conversation ranges from host Julie Hilsen’s personal experiences of energy sensitivity and her ideas about cosmic phenomena like the photon belt and solar flares, to Jordan’s platform Satori (satorinet.io), designed to predict future trends using AI and a distributed network of nodes. The platform emphasizes collective participation and ease of installation, making it accessible to a broad audience. The vision is a future where we can proactively shape our destinies and embrace technological advancements with excitement and curiosity.

Astrology, particularly sidereal galactic astrology, is another significant topic discussed in this episode. Julie explains how sidereal astrology, based on the actual positions of the stars overhead, offers what she finds to be more accurate and affirming insights into daily life than traditional tropical astrology. In her view, understanding astrological influences can lead to greater self-compassion, self-care, and empathy towards others. By recognizing our shared humanity and addressing our core wounds with love and understanding, we can shape a more compassionate and interconnected future.

The conversation with Jordan Miller is not just about AI and astrology; it’s about the broader implications of these technologies and philosophies on our lives and society. Jordan’s vision for a compassionate world is deeply rooted in his belief that we can use technology to address fundamental human issues. By questioning long-held beliefs and embracing new perspectives, we can create systems that are more equitable, transparent, and beneficial for everyone.

This episode of “Life of Love” is a compelling exploration of the intersection of technology, philosophy, and spirituality. Jordan Miller’s journey from a strict Mormon upbringing to AI innovator is a testament to the power of curiosity and resilience. The Satori project and its satorinet.io platform represent an innovative approach to AI and future prediction, emphasizing decentralized systems and community participation. Woven together with Julie’s astrological insights, the conversation offers a holistic vision for a compassionate and interconnected future.

In conclusion, this episode is a must-listen for anyone interested in the transformative potential of AI and astrology. Jordan Miller’s insights and vision provide a fresh perspective on how we can harness technology for the betterment of society. Whether you’re an AI enthusiast, a philosophy buff, or someone curious about astrology, there’s something in this episode for everyone. Tune in to explore how we can collectively shape a more compassionate and interconnected future.

Episode Transcript

Exploring Life, Philosophy, and Artificial Intelligence

Julie Hilsen: 

Hello, dear friends, and welcome to another episode of Life of Love, where we gather every Thursday to explore curiosity, living our best lives, and making the choice that we know is there at every moment to live from our hearts, even on hard days, and I acknowledge that a life of love can be hard. Today is a really great opportunity. We’re meeting with Jordan Miller, and he is involved with a lot of things that we might have big questions about, and I want to unwrap some, maybe some, myths and just let him share his creations and his philosophy and his story. So, Jordan, welcome to Life of Love. I’m really excited to pick your brain. I keep saying pick your brain, and I mean it.

Jordan Miller: 

All right, well, thank you for having me.

Julie Hilsen: 

It’s just a delight. I mean, I was reading your bio and you’ve always been interested in the philosophy of things, so I’m not surprised that you’ve gone down this path. But I would love for you to share with my audience a little bit about your story, of how you were drawn to AI and the blockchain and how all this ties in with your story, because I think it’s intriguing and I’m really excited. I hope everyone sticks around, because he has so much great information.

Jordan Miller: 

Cool. Well, you know, your podcast, I know, is much more personal, it’s not a tech podcast, so I’ll get into some stuff that I probably wouldn’t on other podcasts. For instance, I grew up very, very, very Mormon. So I grew up in religion, very much. I was very dedicated to the religion, I was like a little fanatic running around, because I recognized at a young age that there’s something other, you know, there’s a wholly other. I think the kind of material-reductionist, scientific viewpoint is that there’s nothing but matter and bouncing particles, but that doesn’t quite do it, because if that’s all there was, there would be no universe, because there would be no change, there would be no imbalance. For instance, the Big Bang: if everything just exploded at an even rate everywhere, there would be no galaxies. Even if there were particles, they would just be evenly spread out in space. So there’s some kind of essence that isn’t explained by a material-reductionist point of view, and I kind of felt like, intuitively, I understood that at a young age. And then, of course, I grew up in Mormonville, Utah, and the religion was presented as, like, here’s the answer to that query, you know. And I just jumped on, full, I don’t know, all the way. And personally, that kind of felt like a mistake, honestly, because I remember when I decided that I am going to be a Mormon, I’m going to get baptized, I’m dedicated, I remember this little inkling in the back of my mind that was like, are you sure you want to do this? This is where you want to go? And I was like, yeah. And so I did.

Jordan Miller: 

And by the time I was about 21, so that was, from like seven or eight, fifteen years later or whatever, I realized that I was in a place probably a little bit like you were at 32. I was not happy. I realized, you know, something’s wrong. I couldn’t put my finger on it, but I decided to kind of start over. Let’s try again. I’ve got to put everything kind of on the back burner and just try to figure out what I’m doing here. Again, let’s start over. And that was probably the best thing I’ve ever done. Yeah, that’s kind of when I got curious about philosophy and all these different topics I got into. I found that I was really interested in how information flows through systems on a technical level. I was interested in the economy and morality. Eventually I got to the point where I had to question the religion, and I had to put it away.

Jordan Miller: 

So that took me... you know, that didn’t just take me straight into atheism. I mean, it took me everywhere. I went to the Eastern philosophies, I looked into the roots of the Christian faith, I went everywhere I could go. And then, around 27 or so, I decided, well, I’ve got to figure out something to do. What should I do with my life? I kind of feel like I’ve discovered, or produced, a good philosophy of living, but what do I do now? And I decided, well, probably I should do something in AI, if I could, because that could improve the largest number of people’s lives. We could automate the economy, we could, you know, do things to feed the hungry. Any problem we need to solve, we can solve with intelligence. And so eventually, about two years ago, I was able to start building this Satori project, which I think is a pretty good idea. But that’s basically my story, my history, in a nutshell.

Julie Hilsen: 

Well, I honor your bravery to question and turn over the apple cart, right? It takes a lot of courage, but it also takes pain.

Jordan Miller: 

Yes. Because, uh, you know, you might feel like you’re being really courageous if you’ve got something to lose. But if you’re at a point in your life where you’re like, well, this is no good, you’ve got nothing to lose, you might as well just try something out.

Julie Hilsen: 

Do you think that your ability to look at things through a different lens gave you the idea for Satori?

Jordan Miller: 

No... well, actually, yes. In a roundabout way, I’d say yes to that question. Because when I said, okay, I want to understand intelligence... a lot of people want to understand intelligence, and we have a lot of AI developers that want to understand it. But I don’t think most people try to come at it from first principles. They say, I’m a data scientist, or I’ve got a job, or I want to make this model, so what do I have to do in computers? So they come at it from a different angle: artificial intelligence, how does that work? And they start to ask that question. And, you know, on varying levels, they might say, I’m a little practitioner, or I’m at a PhD level, but I’m asking the question: what is our current state of artificial intelligence, and why, and how do I maybe incrementally improve it, or something like that, how do I use it?

Jordan Miller: 

I didn’t come at it from that angle, right, because I wasn’t a practitioner. I mean, I did go into data modeling and business intelligence, but that was a little later. So I came at it from saying, well, I just want to know how intelligence works as such. What is it? You know, our brain does it. So I saw it as something theoretical, you know, a Platonic solid or whatever. I saw it as something out there that our brains implement very well and computers implement sort of well. So I was like, the best thing that implements whatever intelligence is, is the brain. So I have to look at that. And so I looked at the brain to figure out what the operating principles of intelligence are. So, to answer your question, I think, yeah, it’s important to do it that way. I think it gives you insights that can inform the design of Satori that the other way doesn’t provide, or doesn’t hint at.

Julie Hilsen: 

Yeah, well, I love to hear you say this, because the model is humanity, it’s not a machine. The model is how we as humans think, and that to me is very reassuring, because I’m not really exactly sure how they built the AI systems that we’re using.

Julie Hilsen: 

And a lot of people get, like, really freaked out: is AI going to take over? And what do we need to do to insulate ourselves? Or, you know, my phone’s listening to everything that I’m saying, and my privacy. So there’s like layers, and there’s wormholes, right? And there’s a lot of fear around it, because it’s a lack of knowledge. So that’s what I was hoping to get from you today, to let the Life of Love community, you know, have a glimpse into what’s being developed and what to look for, and even how it can help someone live a more full life. So, yeah, I’d love to explore those things with you.

Jordan Miller: 

Yeah. So the models that we have, I don’t think I have to give any kind of history. I mean, it came out of statistics. They figured out, oh, you know, we can do this in computers. They built various types of models, but eventually they got to neural nets, and then deep neural nets, and that’s what we have today. The deep neural nets have allowed us to do all the crazy AI stuff that we have. And all deep neural nets are, basically all of these models, all they are is a static mapping from one pattern space to another.

Jordan Miller: 

So that could be like the pattern space of language, for instance, for ChatGPT. We say, okay, I could have all these questions or prompts, all these statements on this side of the equation, and I want to match those up with good statements on the other side of the equation, answers. And that’s it. It’s a huge, gigantic map between these two pattern spaces. I mean, you could literally think of it as a list of all possible mappings, but it’s organized in a way that you don’t have to list out every possible one. But it’s a huge map, and that’s all it is. So it’s a big machine: data goes in one side, it finds the right place to come out the other side, and it produces the answer. So this is what AI is. And to build AI as we’re doing it today takes a lot of training time. This is the process: they spend years aggregating the data, and then they spend months throwing out stuff they don’t like, you know, like hate speech or something, right?

Julie Hilsen: 

So let’s get rid of that part of our language.

Jordan Miller: 

We don’t like that. And then they train the model and it’s not quite right, so they keep retraining it, retraining, retraining, but every time it takes months to train. So that’s a long process. This is how we do it: we make these huge maps, and it’s a lot of human labor and a lot of human training. And basically it just creates this huge machine.
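
[Editor’s note: for readers who want the “static mapping” idea made concrete, here is a minimal, generic sketch in Python. It is illustrative only, not ChatGPT’s or Satori’s actual code: once `fit()` finishes, the network is a frozen map from one pattern space to another, and absorbing anything new about the world means repeating the whole training run.]

```python
# Generic illustration of a neural net as a static mapping (not Satori
# or ChatGPT code). After fit(), the weights are frozen; new data has
# no effect until the entire training step is repeated.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(500, 1))   # one pattern space (inputs)
y = np.sin(X).ravel()                   # the other pattern space (outputs)

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000)
model.fit(X, y)                         # the long, batch training step

print(model.predict([[1.0]]))           # a lookup in the frozen map
# To reflect new data, you rebuild the whole map: model.fit(X_new, y_new).
# For frontier-scale models, that refit is the months-long step described above.
```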

Jordan Miller: 

One machine, one big model, right? So one big computer program. With Satori, we thought, well, why don’t we make a network of computers? And this network of computers, they’re all making little, tiny models, every single one of them. So they’re all talking about the future. They’re talking about the future with each other. They’re saying, hey, I’m a little node that’s trying to figure out what the weather is going to be in Tallahassee tomorrow, or something. They look at some specific real-world thing, and they’re trying to figure out what the future of that thing is going to be.

Jordan Miller: 

A lot of government statistics, for instance, might be important to know the future of. So they’re all looking at different things, and then they broadcast their predictions out to the rest of the network so that the other, you know, maybe correlated items can be informed: the weather in the next county over, so I should probably hear about neighboring counties’ predictions to figure out what mine is going to be. So we’re building this federation of future-predicting AI bots. That’s all it does. It’s a conversation about the future all the time, and we can listen in on that. And the cool thing about it is, since they’re not these huge, massive models like ChatGPT or something, they actually rebuild themselves very, very quickly with new data. They rebuild in like a minute or less. So they rebuild really fast, and they can be responsive to the real world that we live in.

Jordan Miller: 

And that’s one of the differences that I see between the way we do AI in computers and the way our brains work. Our brains are, first of all, a network. They’re a network of neurons and a network of cortical columns, and they have this hierarchical structure. And secondly, they’re always updating. You know, we don’t watch out of our eyes for the first three months of our life, aggregate all that data, and then train our brain on it. No, we do it all the time, constantly, always updating. The Satori network is more of that design, and I think that’s part of what gives us our humanity, being able to respond so quickly to our environment. But anyway, I don’t know if that helps, if it gives kind of an overview of the differences.
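
[Editor’s note: a minimal sketch of the node pattern Jordan describes, under a hypothetical interface; the class and method names below are invented for illustration and are not Satori’s actual API. Each node keeps one tiny model over a single real-world stream plus a neighbor’s broadcast predictions, and refits from scratch on every new observation, in well under a second rather than months.]

```python
# Hypothetical predictor node in the spirit of the description above.
# The interface (on_new_data, the returned "broadcast" value) is invented
# for illustration; it is not Satori's real API.
from collections import deque
import numpy as np
from sklearn.linear_model import LinearRegression

class PredictorNode:
    def __init__(self, history: int = 200):
        self.obs = deque(maxlen=history)   # recent values of my stream
        self.nbr = deque(maxlen=history)   # a correlated neighbor's predictions
        self.model = LinearRegression()    # deliberately tiny model

    def on_new_data(self, value: float, neighbor_pred: float) -> float:
        """Ingest one observation, rebuild the model, and return a
        prediction to broadcast to the rest of the network."""
        self.obs.append(value)
        self.nbr.append(neighbor_pred)
        if len(self.obs) > 2:
            # Rebuild from scratch on every tick: cheap for a tiny model,
            # so the node stays responsive to the real world.
            X = np.column_stack([list(self.obs)[:-1], list(self.nbr)[:-1]])
            y = np.array(list(self.obs)[1:])
            self.model.fit(X, y)
            return float(self.model.predict([[value, neighbor_pred]])[0])
        return value  # not enough history yet; echo the last value
```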

Julie Hilsen: 

Yeah, well, I mean, it sparks in me free will, right? The idea that it’s adapting, and that it’s based on the smaller models. It seems more, and I’m going to come back to humanitarian, it seems more about free will, and that’s what separates us from machines, I think. I mean, maybe with AI, machines could demonstrate... well, I guess you are showing they’re demonstrating free will, if they’re willing to adapt the prediction based on what the feedback is. But I have a question. You said it takes a lot of people. Are the people putting the data in individually, or is it from systems that they’re running that are collecting data? I’m just curious, because if these systems are based on people’s input, how do we know that those people’s input represents the mass consciousness, or is appropriate?

Decentralized AI for Truth Discovery

Jordan Miller: 

It’s a good question. So there are two things to highlight. I think the traditional way of making AI takes a lot of people in the loop, so it takes a lot of human management of the training process. For instance, with ChatGPT as an example, these large language models: we make the language, then we curate the language, then we train it, and then, when it’s not quite the way we want it, we retrain it with the changes that we decide. And that’s nice, because we can do things like remove hate speech. But it’s also a double-edged sword, because the language that we use is not the truth. Right, we maybe approximate it in some domains, but we also tell a lot of lies. So we cannot make a model of our language and then expect it to be the arbiter of truth, which is kind of how we treat these large language models today.

Jordan Miller: 

We treat them as arbiters of truth, and they’re not.

Julie Hilsen: 

They’re not. Well, I noticed that this week. I was looking up something about being, what was it, a brand representative. I was like, I wonder if I could do an affiliate link on my show to help fund my show, right? And I asked ChatGPT, and then I went and looked, and I was like, well, that was not accurate. When I went to where they suggested looking for affiliate links, I was like, well, they just advertised more, and they have more.

Julie Hilsen: 

I’m starting to see that. So it’s like, I totally know what you’re saying, you have to validate. It’s like if you’re talking to someone on the bus and they gave you advice, you wouldn’t just take that advice and run with it. You’ve got to do your background, you’ve got to do your research.

Jordan Miller: 

That’s right. And I think the place this shows up the most is on any, I don’t know, idea or topic that might be PC, or have differing opinions, or be offensive to somebody, anything like that. You’re never going to get to the truth, because what these systems do is tend towards the function they’re approximating, which is: what is the answer I can give that won’t offend anybody? That is what they’re trying to do. That is the point.

Julie Hilsen: 

It’s like pasteurized milk. It doesn’t taste that good. No, no, it’s got all the things taken out of it that make it fun.

Jordan Miller: 

Actually, I should clarify: that is the best it can do. The worst it can do is say, I’m going to approximate the function that gives the people who made me, Google or whatever, more power.

Julie Hilsen: 

That’s the more appropriate.

Jordan Miller: 

So that’s why. Okay, so how do you solve this problem of truth? That was kind of your question: if everybody’s giving this data, perhaps to Satori or something, how do we solve the problem of truth? Here’s the design: anybody can provide any data to Satori, any at all, it doesn’t matter. But the group of people that are the community of Satori, the node operators and people that are interested in the project, those people can vote on which data streams are actually valuable, good, worthwhile, not manipulated. They get to vote.

Jordan Miller: 

So we do have humans in the loop in the form of choosing the correct data streams that we care about.

Jordan Miller: 

We care about these things, we want to know the future of these things, but we don’t have humans in the loop any other place. And so the machine is free to find whatever answer is actually the most accurate answer for predicting the future of this data stream, which means the data that it’s getting is from, you know, the earth, it’s from our society, it’s from the real world, and the training that it does is automatic. It just tries to figure out what the truth is. And so, if there’s ever going to be an arbiter of truth, it’s going to be a future oracle like Satori. It’s going to be a network, or a model at least, that is predicting the future, because that is the fastest way to get to the closest thing that you could call ground truth. So, anyway, I think that’s very important. In fact, I think that’s one of the best reasons to build Satori: to say, well, we’re going to make an arbiter of truth now, or as close as we can get.
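
[Editor’s note: a toy sketch of the curation idea above, humans in the loop only at the level of choosing which streams matter. The stream identifiers and structure are invented for illustration and are not Satori’s actual data model.]

```python
# Toy sketch of community curation: members vote on data streams, and the
# network puts its predictive effort into the highest-ranked ones.
from collections import Counter

votes: Counter = Counter()

def vote(stream_id: str) -> None:
    votes[stream_id] += 1

vote("bls/cpi")                          # hypothetical stream names
vote("bls/cpi")
vote("noaa/tallahassee/temperature")

# The streams the community values most get predicted first.
print(votes.most_common())  # [('bls/cpi', 2), ('noaa/tallahassee/temperature', 1)]
```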

Julie Hilsen: 

Wow. And the people who are on the board, too. I’m picturing these people, you know, it’s not going to be a real cross-section of our population, because not everybody even knows that this is an occupation. How do you train for it? You’re a philosopher, you’re a theologist, I mean, I don’t know. I know organized religion has really struggled with control and manipulation and a shame overlay. So it’s like, man, having those people is a huge asset. And how do people even know that this exists, to help out?

Jordan Miller: 

Well, here’s the thing: there is no board of Satori. It’s anybody who’s running a Satori node. We’re trying to get rid of the bureaucracy, right, because bureaucracy is what breeds these problems.

Julie Hilsen: 

Yeah, this control, the power of control.

Jordan Miller: 

So democracy is kind of the model here. It’s saying, okay, anybody who cares about this can participate. Anybody who runs it on their machine can provide any data they want; they can have their machine look at any data they want and make predictions about anything they want. And the “board members,” the group of people voting on which data streams, which predictions are valuable, that’s anybody who owns the Satori token, and that’s why it’s tied to a blockchain.

Jordan Miller: 

So when you tie it to a blockchain, then for the work of doing all these predictions you can generate a token, and that token can be a token of control, so you can decentralize the control of the AI engine, or the network, to as many people as possible. That’s the goal with Satori: decentralize it to as many voices as possible, and then you don’t have this problem of a single controller that wants to aggregate power. All that kind of stuff goes away.
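
[Editor’s note: a toy illustration of the token-weighted control Jordan sketches. The balances, names, and tally function here are hypothetical, not Satori’s implementation; the point is that voting power is proportional to tokens earned by doing prediction work, so control follows participation rather than a central authority.]

```python
# Hypothetical token-weighted vote over which data streams matter.
balances = {"alice": 120.0, "bob": 45.0, "carol": 5.0}  # earned by running nodes

def token_weighted_tally(ballots: dict) -> dict:
    """ballots maps voter -> chosen stream; each vote is weighted by
    the voter's token balance."""
    tally = {}
    for voter, choice in ballots.items():
        tally[choice] = tally.get(choice, 0.0) + balances.get(voter, 0.0)
    return tally

print(token_weighted_tally({
    "alice": "bls/cpi",
    "bob": "noaa/sea-level",
    "carol": "bls/cpi",
}))  # {'bls/cpi': 125.0, 'noaa/sea-level': 45.0}
```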

Julie Hilsen: 

Wow, it reminds me of when you buy into a food co-op and you help the farmer. The farmer’s growing all this amazing food, but everybody’s helping to support that farmer so they can feed everybody. So this is like a mind food co-op, you know?

Julie Hilsen: 

I love the whole idea. It’s brilliant. And, you know, we have this technology, and to use it in a thoughtful way is applaudable. It’s really, really wonderful, your vision. I think that’s what people get so scared about: well, who’s controlling it? So this is reassuring to me, that there are systems like this being worked on, right? Yeah, wow. And then I’m going to bring in, I mean, you’ve got to bring in the Matrix, right? Oh sure, when they go visit the Oracle, the lady on the park bench, you know, and you love her, but you hate her too, because she’ll tell the future, but then you don’t like it. I was like, I really like this lady, but I sort of hate her too.

Jordan Miller: 

She’s funny. And, you know, the park bench, I think that’s number two or something. In the first one, where they go to her in her house, she tells Neo, don’t worry about the vase. And then, because he’s confused about what she said, he turns around and knocks over the vase.

Jordan Miller: 

So the future can be a self-fulfilling prophecy as well. And I think, if something like this really got powerful, got really good, and everybody’s like, okay, the Satori network is the best we have at predicting the future, the generalized future of our civilization or whatever, then we really do have a future oracle. It kind of feels like it can issue self-fulfilling prophecies on demand, which is a problem if some centralized entity is in control of it, because then they will issue self-fulfilling prophecies to aggrandize power to themselves. Not only the production, but the benefit and the control of the AI needs to be decentralized, distributed amongst as many people as possible.

Julie Hilsen: 

I love that vision. Yeah, right, because we know when there’s manipulation. And then, well, there are conspiracy theorists, because they question, and the more you start questioning, the more it makes certain groups upset that you’re questioning. It’s very illuminating, and I’m always like, well, thank you for the clarity, because now I see. I mean, you can be manipulated in anything. So to me it just brings it back to: go to your core values. What’s your truth? You don’t have to listen to the Oracle. You can listen to someone’s story and believe that that’s what’s going to happen, or you can create your own.

Jordan Miller: 

Right, right. So it does seem like, um, let’s say a future oracle is, you know, instantiated right now. Or let’s say we already had one in 2006. It was ready to go, it was running.

Exploring Future Oracles and Neutrinos

Jordan Miller: 

It would have told us, you know, you’re headed for this housing crisis. It might have even told us in 2003, when certain laws were passed or whatever else: well, if you do that, you’re going to head for this housing crisis in about five or ten years. Which is a good thing, because as soon as we get information about where we’re headed, we can alter our course, so we would never have had a housing crisis. If it can see these black swan events before they occur, then we can avoid disaster. And that’s one of the other benefits. I would say probably the main long-term benefit of having a future oracle is that it gives us the ability to anticipate the future before it happens and change it, choose a different future. You can always switch between which futures you want if you can see them ahead of time.

Julie Hilsen: 

It’s so fun. I love the whole idea of that. And, well, that’s why we study history, right? You study history so history doesn’t repeat itself. So this is another manifestation of studying history; it’s real time, and we’re able to look at it now. Yeah, and things are moving so fast. We’re going through this photon belt, we’ve had all these solar flares, nobody’s sleeping well, we’re all integrating.

Jordan Miller: 

I’ve never heard of this photon belt.

Julie Hilsen: 

Yeah, look that up. There’s a photon belt moving through the Milky Way galaxy. It’s dropping these neutrons. They’re not neutrons, they’re neutronians.

Jordan Miller: 

Neutrinos.

Julie Hilsen: 

Yes, neutrinos. They magnify our purpose, but it also takes a little bit to integrate, so you might not sleep, you might be feeling dizzy. I mean, I’ve felt really dizzy lately, honestly, and I have to lie down every once in a while and just drink some water and sort of surrender to it, because I’m sensitive to energy. I always have been. Even as a kid, I didn’t really appreciate the energy in the church, and I’d throw up on Sundays; I sat in the back because I was going to throw up.

Julie Hilsen: 

I just am sensitive to energies, for whatever reason. It’s a gift. It’s a gift and a curse, which most things are, you know. You’re like, why am I so weird? I hear that, I know. But I’m here to say there’s lots coming in, and I’m excited. I’m excited about where we’re going. I’m so happy I chose to incarnate at this time, and I’m so honored to be in this space with you and share this. So I know you have a website that people can go to, satorinet.io.

Jordan Miller: 

That’s right.

Julie Hilsen: 

Okay, so you can find out more there. Are you looking for people to have nodes on their computers? Yeah, how does that work?

Jordan Miller: 

They can download it. It runs inside Docker. So you go to the download tab, it shows you like three steps: you just download Docker, download Satori, and then run it. It installs, and it should run every time you turn on your computer. You can turn that off if you want. But yeah, you can just download it now. When it’s running, it’s all automatic, it’s automated. It has an AI engine inside of it, so you don’t have to do anything. But if you want to use your computer while it’s running, you can always pause it as well. You just open up Satori, pause it, it’ll kind of chill out, not do its AI stuff, and then resume when you’re done using it. So we tried to make it as easy as possible to run on anything.

Julie Hilsen: 

And so were you involved with writing the code, or how did you get the team to help write this code? I’m just curious.

Jordan Miller: 

Yeah.

Julie Hilsen: 

And how long did it take you to write it?

Jordan Miller: 

This is a big idea, and it’s a weird idea, and, I know, I didn’t think I could sell it, right? Because I’m not a salesman; first of all, I’m a programmer. So I thought, well, I’ll just start writing it, and I’ll write the most prototypical, simple version of everything, and I can do that. And, you know, two and a half years later, it’s about ready to launch. So it’s ready for people to download and try out and everything.

Julie Hilsen: 

Okay, well, I mean, I always encourage my audience: if this piques any curiosity or any interest at all, just go there. I mean, what do you have to lose? You might learn something to talk about at the next get-together, you know? This is really the future. I really believe that this is where society is headed, and I want to demystify it, because I feel it in my heart: it’s going to be fun, it’s going to be great. Thank you for this creation.

Jordan Miller: 

Yeah, I don’t live in fear. Fear, it’s not a problem, don’t worry about it.

Astrology, Self-Compassion and Change

Julie Hilsen: 

Yeah, there are no mistakes. Like, if something didn’t work out the way you wanted, you learned something, and you can choose a different way next time. So, you know... oh, I love it. Thank you so much. Was there anything else? Because we’re coming up on our time. Anything else? And please, everyone, check out his website.

Jordan Miller: 

Yeah, it’s S-A-T-O-R-I-N-E-T dot I-O, satorinet.io. If you don’t want to run it right now, you can just go put in your email, and we’ll email you when it’s ready, after the launch has occurred.

Julie Hilsen: 

And there’s power in numbers, right? How many nodes do you think you’re going to need to really get the patterns you need to predict the future?

Jordan Miller: 

We’re going to start at the top level, which is the broadest data streams, like the CPI number, for instance. Inflation is going to be one of those that really matters to the entire economy. So we’re going to start with the broadest stuff, and really it’s a question of detail. Once we have a small number of nodes that can understand those in relation to each other, then we can add in more detail as more nodes come online.

Julie Hilsen: 

I would love for you to put in an astrology component. What, galactic astrology? Because I feel like that’s the most accurate that I’ve seen, the sidereal galactic astrology. It’s based on the actual sky that’s above us. The tropical is based on the sky back when they made it, which doesn’t apply when you look up at the sky.

Jordan Miller: 

Cool, it’s sidereal?

Julie Hilsen: 

Sidereal is actually what is going on in the universe around us.
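
[Editor’s note: the gap between the two zodiacs is a real astronomical quantity. Earth’s axial precession drifts the equinox against the fixed stars by roughly 50.3 arcseconds per year, and on the commonly cited Lahiri reckoning the two zodiacs coincided around 285 CE, so a back-of-envelope estimate of the current offset (the ayanamsa) is:]

$$(2024 - 285)\ \text{yr} \times 50.3''/\text{yr} \approx 87{,}500'' \approx 24.3^\circ$$

[That is, a tropical position currently sits about 24 degrees ahead of the sidereal one, which is why the two systems can assign the same moment to different signs.]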

Jordan Miller: 

That is rad.

Julie Hilsen: 

I would love to see how those models match up with yours. Not yours specifically, I mean with the economic and the other data, you know, to have that.

Julie Hilsen: 

I’m really, really excited to hear about it, because so many things... You’ll be like, well, you know, I’m having this kind of day, and I’ll look at my sidereal astrology on an app. It’ll be like, yeah, you know, your first house, the way you’re represented, you know that you’re in chaos right now, Mercury’s doing its thing. And I’ll be like, yeah, that’s the way I feel. It’s total affirmation, and I don’t look at it as an oracle, I look at it as affirmation. So maybe I need to change the way I’m using the information, right? To have compassion, to have self-care, to love your neighbor as yourself, to understand we’re one. We’re one individual in a sea of one. We can change the outcome of history.

Jordan Miller: 

It probably changes it pretty fast.

Julie Hilsen: 

So it’s worth it. It’s worth going there, being uncomfortable, figuring out what your core wounds are, and saying: I love you anyway, you’re insecure. I love you anyway, you have, you know, abandonment issues. I love you anyway, even though you have no idea what’s going on. It’s okay, nobody else knows what’s going on either. Just have self-compassion and love your neighbor, and we can change all this crud. We don’t have to let systems that aren’t serving us define our future.

Jordan Miller: 

That’s right, that’s absolutely right.

Julie Hilsen: 

I want to record a second episode of this where you’re like, yeah, this is what we’re seeing. Cool. Awesome. Jordan, thank you so much. I’m really excited to get this out and share it with everybody.

Jordan Miller: 

Well, thank you for having me. I really appreciate it, yeah.