calvinfo's comments | Hacker News

A couple of things that really help...

Matrices isn't editable (sort of by design) and works best for columns that are all of the same type. It then uses Arrow for fast in-memory analytics.

Google Sheets has to serve a much broader array of use cases, so I think it can only do so much to improve performance. It can't always rely on having consistent rows and columns.
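
For a feel of what the Arrow path looks like, here's a minimal sketch (assuming pyarrow and made-up column names, not Matrices' actual code):

    import pyarrow as pa
    import pyarrow.compute as pc

    # A columnar table: each column holds a single type, stored contiguously.
    table = pa.table({
        "region": ["us", "us", "eu", "eu"],
        "revenue": [120.0, 80.0, 200.0, 50.0],
    })

    # Vectorized aggregations over whole columns, no per-cell overhead.
    total = pc.sum(table["revenue"])
    by_region = table.group_by("region").aggregate([("revenue", "sum")])
    print(total, by_region)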


Thanks!

Domain was an opportune buy off Flippa. It wasn't really being used for anything and the seller wanted to get rid of it.

> What are you using for the charts? 100k points on a scatter is pretty impressive with SVG.

Visx/SVG right now. You're right that 100k points gets slow on a scatter plot; for those charts in particular, some sampling occurs. At some point I'd like to investigate other options, but this works for the MVP.
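
The sampling itself can be as simple as a uniform random draw before the points ever hit the SVG layer. An illustrative sketch in Python (not the actual Visx code; the threshold is a made-up number):

    import random

    MAX_POINTS = 10_000  # hypothetical render budget for an SVG scatter

    def downsample(points):
        """Uniformly sample points so the chart never renders more than MAX_POINTS."""
        if len(points) <= MAX_POINTS:
            return points
        return random.sample(points, MAX_POINTS)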


The colder it gets, the lower the efficiency. In a more extreme climate like Fairbanks, the efficiency pretty much drops down to that of resistive heating (a COP of 1.0).

If you're set on electrifying there, it might make more sense to investigate a ground-source heat pump which leverages the ambient underground temperature for heat exchange.
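
To make the efficiency point concrete, here's a rough back-of-the-envelope comparison (illustrative Python with made-up heat demand, COP values, and electricity price, not numbers from the site):

    # Seasonal heating demand in kWh of delivered heat (hypothetical).
    heat_demand_kwh = 10_000
    price_per_kwh = 0.15  # USD, hypothetical electricity price

    for label, cop in [("mild-climate air-source", 3.0),
                       ("very cold climate air-source", 1.0),  # ~resistive
                       ("ground-source", 3.5)]:
        electricity_kwh = heat_demand_kwh / cop
        cost = electricity_kwh * price_per_kwh
        print(f"{label}: {electricity_kwh:,.0f} kWh -> ${cost:,.0f}")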


Big +1 to ground-source here! Everything shown on the site today is air-source only.


Thanks for giving it a try!

We've tried to make this a little clearer in the "receipt view", but the major difference is the cost of the hardware.

Furnaces, heat pumps, and A/C units all have a 15-20 year useful lifetime.

While the annual savings will be only a bit less, the hardware cost will be _significantly_ less because you only need one unit for a heat pump vs. two for an A/C + furnace combo.
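
As a purely illustrative comparison (hypothetical installed prices, not figures from the calculator):

    # Hypothetical installed costs over one 15-20 year replacement cycle.
    heat_pump = 14_000                 # one unit handles both heating and cooling
    ac_plus_furnace = 10_000 + 6_000   # two separate units to buy and install

    print("hardware difference:", ac_plus_furnace - heat_pump)  # 2,000 in this made-up example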

Does that make more sense? We'll try to clarify in the UI.


If you already have an A/C, a furnace is very, very cheap, so cheap it's almost free (i.e. included with the A/C).

I don't see how you can possibly save $37,000 just by getting rid of the furnace - furnaces just don't cost that much.


Author here. I'm sad to hear this and want to understand how we can do better.

Do you mind following up over email? I'm calvin at segment.


I don't know a ton about the embedded/IoT world, but it's an area that has been coming up on my radar more recently. This project seems like a super interesting solution to the problems of deployment, updates, and monitoring.

Could you talk a bit more about the focus on Linux? How much of the ecosystem is running a full OS vs. a small embedded program?


To some extent, it's a matter of ease.

For embedded systems, you'd have to build the command-and-control elements in, which then raises concerns about the amount of memory consumed (both RAM and flash).

For example, the ESP* devices historically have had challenges because you don't have enough flash storage to hold a proper SSL trust anchor chain. Suffice it to say there is a LOT of nuance when working in the embedded world which vanishes if you (even if just for the purposes of a minimum viable product) focus on a full OS.


The short answer, agreeing with @crtlaltdel, is that Linux is getting really popular for embedded/IoT use cases. A lot of "embedded" devices are starting to look more and more like real servers. The latest Raspberry Pi models go up to 4 GB of RAM!

Linux has been standard in the cloud for a long time, and as a result the tooling is pretty mature now. Projects like Kubernetes are great for managing servers in the cloud and have become quite standard. We noticed the same level of tooling for running Linux on devices just wasn't there yet, so we decided to try to fix that!

That being said, there are certainly use cases where Linux isn't the best choice. If you have real-time requirements or you need low power usage, then an RTOS is a better option.


Linux has become quite common in IoT, which has really expanded what a lot of people consider an "embedded system". We built a platform on a custom BSP for OpenWrt for industrial applications where an RTOS or PLC wasn't needed.


Disclaimer: I’m a cofounder of Segment [1], we build a product to help with these problems.

Given what you’ve shared here, it sounds less like your problems are related to scaling for data volume, and more related to all of the complexity that comes with a data pipeline. Instead of adding a bunch of new components, it sounds like you need just a few.

My concrete advice:

- Standardize and document the collection point for your data. Create a tracking plan which documents how data is generated. Have an API or libraries which enforce the schema you want. If the sources of data are inconsistent, it's going to be hard to link them together over time.
- Load all of the raw (but formatted) data onto S3 in a consistent format. This can be your long-term base to start building a pipeline, and the source for loading data into a warehouse.
- Load that data into BigQuery (or potentially Postgres) for interactive querying of the raw data. For your dataset, the cost will be totally insignificant, and the results should give your analysts a way to explore your data from the consistent base.
- Have a set of Airflow jobs which take that raw data and create normalized views in your database (see the sketch below). Internally we call these "Golden" reports, and they are a more approachable means of querying your data for the questions you might ask all the time. The key is that these are built off the same raw data as the interactive queries.
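
As a rough sketch of that last bullet (hypothetical DAG and table names, a placeholder transform, and an assumption of Airflow 2.x; this isn't our internal code):

    from datetime import datetime
    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def build_golden_signups(**context):
        # Placeholder: read the raw, consistently formatted events (e.g. from S3
        # or the warehouse) and write a normalized "golden" view analysts can query.
        pass

    with DAG(
        dag_id="golden_reports",
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        PythonOperator(task_id="build_golden_signups",
                       python_callable=build_golden_signups)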

We use Segment to manage the first three bullets (collect consistently, load into S3, load into a warehouse). Then we use Airflow to create the golden reports that analysts query via Mode and Tableau. As other commenters have mentioned, there are a number of tools to do this (Stitch, Glue, Dataflow), but the key is getting consistency and a shared understanding of how data flows through your system.

This is a pattern we’ve started to see hundreds of customers converge on: a single collection API that pipes to object storage that is loaded into a warehouse for interactive queries. Custom pipelines are built with spark and Hadoop on this dataset, coordinated with airflow.

[1]: https://segment.com


We originally moved to Boston to be close to universities. It seemed like a better location for convincing our professors to use the edtech product we'd started with.

We made the move back to SF to be closer to our customers (since we had shifted to building an analytics product by then). It was much easier to walk them through the product in-person.

I wouldn't say this is strictly necessary today, but SF does have a nice density of startups if you are building a developer tool.


Yeah, Boston has always felt like a great place for a startup/young company, with all of those universities on the East Coast or, like, blocks away in Cambridge.

Makes sense though that you wanted to be near your customer base.


Thank you! I'd agree with the other commenters that Peter's talk is the best place to go for hearing the in-depth story: https://blog.ycombinator.com/peter-reinhardt-on-finding-prod...

I'd say more generally that finding product-market fit felt much more like a 'pull' than a 'push' motion. We had been trying for 8 months to convince even a single user to rely on the product we were building earlier, and it just wasn't sticking.

When we launched today's product, we started seeing a lot more pull from customers. We solved one problem that other products did not, which prompted a bunch more requests from them.


Thanks, fixed!

