AlwaysHere AI - An Interview with Josh Rosenfeld by Bennett Davishoff
Bennett: What is the core mission of the AlwaysHere program, and how does AI enable it?
Josh: The mission of AlwaysHere is to provide a comforting voice for neurodivergent folks, day and night. The program couldn't exist without AI: I use AI to clone the caregiver's voice and to build the brains of the chatbot. Each chatbot uses a patent-pending process that knows the user's routines.
Bennett: Tell me a little about yourself.
Josh: My name is Josh Rosenfeld. I'm 53, I went to Michigan State University, and I've been programming computers since 1977. I've had the great fortune to work at many companies, including Apple, Walgreens, and Zillow, to name a few. As part of those jobs, I created user experiences for hundreds of millions of people around the globe. Most recently, I've been teaching businesses how to make AI chatbots. One thing I've learned over the past few years is that conversational AI is incredibly powerful. Earlier this year, the mother of one of my students asked me to create a voice chatbot that sounded just like her, to comfort and support her son when she's not available, and that resulted in me creating AlwaysHere. I later met a woman who runs one of the largest neurodivergent communities in Illinois, and it made me realize that there really is a need for AI in this particular field.
Bennett: Who are the intended users of AlwaysHere, and what needs does it aim to address?
Josh: The intended users are parents and caregivers of neurodivergent people.
Bennett: How does AlwaysHere differentiate itself from other AI-based companionship or support tools?
Josh: Each chatbot is created by interviewing the caregiver individually. Using our back-end technology, we create one-of-a-kind advanced AI voice companions that specifically cater to this group. People in the industry have told me that a program like AlwaysHere hasn't existed until now; a year ago, the technology simply wasn't good enough.
Bennett: What safeguards are built into AlwaysHere to prevent dependency or over-reliance on AI?
Josh: Each account is limited to 180 minutes a month. You can get more minutes, but that's up to the caregiver. The other guardrails are the prompts themselves that power the chatbots. In addition, after every conversation, the parent gets a transcript and an AI summary emailed to them.
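[Editor's note: AlwaysHere's implementation isn't public, but a minimal sketch of the kind of flow Josh describes, a monthly minute cap plus a post-conversation transcript-and-summary email, might look like the following. All names, stubs, and helpers here are illustrative assumptions, not AlwaysHere's actual code.]

```python
# Hypothetical sketch: monthly usage cap with a post-call summary email.
from dataclasses import dataclass, field
from datetime import datetime

MONTHLY_LIMIT_MINUTES = 180  # base allowance; caregivers can add more


@dataclass
class Account:
    caregiver_email: str
    extra_minutes: int = 0                     # top-ups chosen by the caregiver
    usage: dict = field(default_factory=dict)  # "YYYY-MM" -> minutes used

    def minutes_remaining(self, now: datetime) -> int:
        used = self.usage.get(now.strftime("%Y-%m"), 0)
        return MONTHLY_LIMIT_MINUTES + self.extra_minutes - used


def summarize(transcript: str) -> str:
    # Stand-in for an LLM summarization call.
    return transcript[:200] + "..."


def send_email(to: str, subject: str, body: str) -> None:
    # Stand-in for a real mail service.
    print(f"To: {to}\nSubject: {subject}\n\n{body}")


def end_conversation(account: Account, transcript: str, minutes: int) -> None:
    """Record usage, then email the caregiver a transcript and AI summary."""
    month = datetime.now().strftime("%Y-%m")
    account.usage[month] = account.usage.get(month, 0) + minutes
    send_email(account.caregiver_email,
               subject="AlwaysHere conversation summary",
               body=f"{summarize(transcript)}\n\n--- Transcript ---\n{transcript}")
```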
Bennett: How does AlwaysHere handle situations where human judgment or intervention is required?
Josh: AlwaysHere immediately calls the emergency contact. That's how a human intervenes: the AI reaches out to the caregiver directly.
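[Editor's note: a simplified sketch of this kind of escalation logic follows. The keyword heuristic and the notify_emergency_contact helper are illustrative assumptions; a production system would use a far more robust trigger.]

```python
# Hypothetical escalation sketch: if a message trips a safety trigger,
# stop the AI and call the emergency contact.

CRISIS_KEYWORDS = {"hurt myself", "emergency", "can't breathe", "help me now"}


def notify_emergency_contact(phone: str, reason: str) -> None:
    # Stand-in for a real telephony call via a provider API.
    print(f"Calling {phone}: {reason}")


def handle_user_message(message: str, emergency_phone: str) -> bool:
    """Return True if the message was escalated to a human."""
    text = message.lower()
    if any(kw in text for kw in CRISIS_KEYWORDS):
        notify_emergency_contact(emergency_phone,
                                 reason="AlwaysHere flagged a message for review")
        return True  # pause the AI; a human takes over from here
    return False
```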
Ethical & Social Implications
Bennett: What are the potential benefits of AlwaysHere for communities that lack access to traditional support systems?
Josh: The program is low-cost and instantly available compared to traditional services. Plus, it is leveraging AI for a whole new group of people.
Bennett: How do you address concerns about emotional attachment to AI within AlwaysHere?
Josh: That's up to the caregiver. I try to provide a realistic companion. How people will actually use this platform, I'm honestly a little unsure, because nothing like this has ever existed until now.
Bennett: Could programs like AlwaysHere replace or diminish human-to-human connections?
Josh: I think it probably will. I feel this is the future!
Bennett: How is bias in AI decision-making addressed in AlwaysHere’s design?
Josh: I don’t know if this directly applies to AlwaysHere, but I believe AI is getting safer every day. I always try to use the best, safest technology I can, but in the end this is something that the parents have to monitor.
Bennett: What measures are in place to ensure AlwaysHere respects privacy and confidentiality?
Josh: We don't share our data or expose our conversations. Everything is secure, including API calls. As soon as I get more people interested, AlwaysHere will be available for clinical uses and will have HIPAA compliance.
Technical & Design Questions
Bennett: How does AlwaysHere learn and adapt to individual users over time?
Josh: AlwaysHere stores memories in a separate database that is fed by conversations. It has its own memory, and parents can update the app when needed so the chatbot is aware of things that change in the user's life.
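[Editor's note: one plausible shape for a per-user memory store kept separate from the chat model is sketched below. The schema and helper names are assumptions for illustration.]

```python
# Hypothetical sketch of a separate memory database that parents can update
# and whose contents are rendered into the chatbot's prompt.
import sqlite3


def init_store(path: str = "memories.db") -> sqlite3.Connection:
    conn = sqlite3.connect(path)
    conn.execute("""CREATE TABLE IF NOT EXISTS memories (
                        user_id TEXT, key TEXT, value TEXT,
                        PRIMARY KEY (user_id, key))""")
    return conn


def upsert_memory(conn, user_id: str, key: str, value: str) -> None:
    """Parents update facts here so the chatbot stays current."""
    conn.execute("INSERT INTO memories VALUES (?, ?, ?) "
                 "ON CONFLICT(user_id, key) DO UPDATE SET value = excluded.value",
                 (user_id, key, value))
    conn.commit()


def memory_context(conn, user_id: str) -> str:
    """Render stored memories as text to prepend to the chatbot's prompt."""
    rows = conn.execute("SELECT key, value FROM memories WHERE user_id = ?",
                        (user_id,)).fetchall()
    return "\n".join(f"- {k}: {v}" for k, v in rows)


# Example: a parent records a routine change before the next conversation.
conn = init_store()
upsert_memory(conn, "alex", "bedtime routine", "lights out at 9pm, white noise on")
print(memory_context(conn, "alex"))
```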
Bennett: What challenges have you faced in training AlwaysHere to respond appropriately in sensitive or high-stakes contexts?
Josh: AI requires a ton of testing. One of the bigger challenges is a lack of funding to create more advanced AI testing tools; I'm completely self-funded.
Bennett: How does AlwaysHere balance personalization with user safety?
Josh: The guardrails are separate from the personalization. I don't mix the two together.
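[Editor's note: keeping safety rules and per-family personalization in separate layers, then composing them at runtime, might look roughly like this. The specific rules and persona fields below are assumptions, not AlwaysHere's actual prompts.]

```python
# Hypothetical sketch: guardrails and personalization live in separate
# layers and are only combined when the system prompt is built.

GUARDRAILS = """You must never give medical advice.
If the user seems to be in danger, say you are contacting their caregiver.
Stay warm, patient, and concrete."""


def build_system_prompt(persona: dict, memories: str) -> str:
    personalization = (f"You speak in the voice of {persona['caregiver_name']}. "
                       f"You are talking with {persona['user_name']}.\n"
                       f"Things you know about them:\n{memories}")
    # Guardrails are appended last so persona details can't dilute them.
    return f"{personalization}\n\n{GUARDRAILS}"


print(build_system_prompt({"caregiver_name": "Maria", "user_name": "Alex"},
                          "- bedtime routine: lights out at 9pm"))
```

Placing the guardrail text in its own fixed block, rather than weaving it through each family's persona, is one common way to keep safety rules uniform across every chatbot.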
Future & Broader Impacts
Bennett: In what ways do you see AlwaysHere shaping the future of human–AI relationships?
Josh: I see it as revolutionary! This can help people all around the world. This is the future. I’m an evangelist for this stuff. I live and breathe it.
Bennett: How might AlwaysHere influence the way we think about caregiving, companionship, or therapy?
Josh: I think that it’s a whole new way to look at the world through artificial intelligence. This is the future.
Bennett: Could AlwaysHere contribute to reshaping legal or policy standards around AI accountability?
Josh: Definitely! How humans interact with chatbots is new. Our findings can help to shape policy and standards for everyone.
Bennett: What’s your long-term vision for AlwaysHere in society?
Josh: My long-term vision is for AlwaysHere to be available to everybody from children to adults to seniors in any language for any country.