Hacker News | ebspelman's comments

I work on an app called Polycam, and we have an automated Room / floorplan capture mode. We actually announced some updates to it earlier this week:

https://twitter.com/Polycam3D/status/1623730477637959680


Love Polycam! My wife just got the newest iPhone this fall before our remodel started, and I was stoked to capture our house before the remodel. I cannot wait to get the "after" all scanned in for comparison. https://poly.cam/capture/11B48FCD-E6A5-4ED7-9C9D-BF5DFE60579...


I love the app! The interior designer and property agent who saw me using it were pretty blown away.

One thing I struggle with though is making sure that the capture is robust enough before I move on. I usually only really know that something is off when the capture is done already. Can I “touch up” an existing capture by doing another local scan of a problematic area?


You can - there's an 'Extend' button for LiDAR captures, but it can be a little touch-and-go sometimes. Definitely an area for us to work on more!


thanks, this is super cool. Does it have decent geometry export options?


Yep! For meshes we've got GLTF, OBJ, FBX, STL, and DAE. And then you can also export the 2D floorplan as SVG and DXF.


politely but firmly (with all my soul) disagree.


It's the level of sweetness and crispness I'm troubled with most of the time. Braeburns are crispier and better balanced in sweetness (neither sour nor too sweet) IMO.


Even though current VR/AR interfaces are completely useless for Excel or other data-management tasks (no keyboard support, unreliable controls, lack of development interest), I think in the future there are more embodied / spatial treatments of data access that could feel like an improvement on '2D' Excel.

As the author mentions, the original moniker for Excel was VisiCalc - a visual calculator. There's no inherent reason why 3D spatial representation would be a worse medium for a calculator.


> There's no inherent reason why 3D spatial representation would be a worse medium for a calculator.

The main inherent reason why 3D isn't as great as it seems is that human vision can't see through solids. We don't perceive an entire 3D volume, we just perceive the part of its surface that faces us. We can obviously get more information from stereoscopic vision compared to 2D, but it's not a full other dimension of complete volumetric data. We mostly see a 2D surface with some depth information.


> As the author mentions, the original moniker for Excel was VisiCalc

VisiCalc wasn't actually a moniker for Excel. It was a predecessor. It was the first spreadsheet program, which was made by a different company, VisiCorp, and released in 1979. Excel was developed by Microsoft and released in 1985. Prior to Excel, Microsoft had released an earlier spreadsheet called Multiplan in 1982.


True. Also Excel was a clone of Lotus 1-2-3, not VisiCalc, as it copied its macro language too.


There has been a lot of work looking into 3D visualization, but it really seems like the benefits are pretty minimal compared to the drawbacks. Even 2D visualizations seem to do better when limited to a single spatial dimension for carrying information (e.g., pie charts are inferior to bar charts in almost every way).


I should disclose that I work on a 3D capture app called Polycam, but in that work I've grown used to the idea that 3D captures are inherently better at conveying some kinds of visual information than photographs are. Like a room with graffiti on the walls. The opposite is also true - 2D photos are way better at sunsets & portraits.

So I guess what I'm saying is that I'd bet there are some undiscovered cases where 3D is going to be better for data representation / manipulation.


That's a great point. I had a VR headset that worked with my phone a few years ago, and it was absolutely incredible how 3D still images can make you feel like you are someplace you are not, at least compared to 2D still images and video. 3D will certainly give designers more tools for making memorable visualizations, which is an important quality for many of them.


If the argument is that you need the third dimension to reflect the shape of the data, you're not going to want to stop at three dimensions when working with stuff like multi-dimensional tensors for machine learning, etc. So any 3D display system will have the same problem displaying a 4D grid as a 2D display system has displaying a 3D grid.

Of course any >2D spreadsheet or data viewing / editing / programming language (e.g. Python / Numpy / TensorFlow / Dwarf Fortress / Minecraft / etc.) needs to project and slice high dimensional data onto the 2D screen somehow, because displays and human retinas are 2D by nature.

But if it's a practical question of optimizing for human perception (retinas are 2D), engineering (screens are 2D), usability (you can't see or click on something that's hidden behind something else), and user interface design, then 2D wins hands down over 3D.
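
To make the "project and slice" point concrete, here is a minimal numpy sketch (the 4D array and its axis names are made up purely for illustration). Whatever the display technology, you end up picking two axes to show on the flat grid and fixing or aggregating the rest:

    import numpy as np

    # Hypothetical 4D "spreadsheet": (year, region, product, metric)
    data = np.arange(2 * 3 * 4 * 5).reshape(2, 3, 4, 5)

    # A 2D screen (or retina) only shows two axes at a time, so you
    # either fix the other axes to single values...
    slice_2d = data[0, 1, :, :]             # year=0, region=1 -> 4x5 grid

    # ...or aggregate them away.
    projection_2d = data.sum(axis=(0, 1))   # sum over year and region -> 4x5

    print(slice_2d.shape, projection_2d.shape)  # (4, 5) (4, 5)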

Dave Ackley, who developed the Moveable Feast Machine, had some interesting thoughts about moving from 2D to 3D grids of cells, suggesting finite layering in z (depth), but unlimited scaling in x and y (2D grid):

https://news.ycombinator.com/item?id=21131468

DonHopkins on Oct 1, 2019 | on: Wolfram Rule 30 Prizes

Very beautiful and artistically rendered! Those would make great fireworks and weapons in Minecraft! From a different engineering perspective, Dave Ackley had some interesting things to say about the difficulties of going from 2D to 3D, which I quoted in an earlier discussion about visual programming:

https://news.ycombinator.com/item?id=18497585

David Ackley, who developed the two-dimensional CA-like "Moveable Feast Machine" architecture for "Robust First Computing", touched on moving from 2D to 3D in his retirement talk:

https://youtu.be/YtzKgTxtVH8?t=3780

"Well 3D is the number one question. And my answer is, depending on what mood I'm in, we need to crawl before we fly."

"Or I say, I need to actually preserve one dimension to build the thing and fix it. Imagine if you had a three-dimensional computer, how you can actually fix something in the middle of it? It's going to be a bit of a challenge."

"So fundamentally, I'm just keeping the third dimension in my back pocket, to do other engineering. I think it would be relatively easy to imagine taking a 2D model like this, and having a finite number of layers of it, sort of a 2.1D model, where there would be a little local communication up and down, and then it was indefinitely scalable in two dimensions."

"And I think that might in fact be quite powerful. Beyond that you think about things like what about wrap-around torus connectivity rooowaaah, non-euclidian dwooraaah, aaah uuh, they say you can do that if you want, but you have to respect indefinite scalability. Our world is 3D, and you can make little tricks to make toruses embedded in a thing, but it has other consequences."

Here's more stuff about the Moveable Feast Machine:

https://news.ycombinator.com/item?id=15560845

https://news.ycombinator.com/item?id=14236973

The most amazing mind blowing demo is Robust-first Computing: Distributed City Generation:

https://www.youtube.com/watch?v=XkSXERxucPc

And a paper about how that works:

https://www.cs.unm.edu/~ackley/papers/paper_tsmall1_11_24.pd...

Plus there's a lot more here:

https://movablefeastmachine.org/

Now he's working on a hardware implementation of indefinitely scalable robust first computing:

https://www.youtube.com/channel/UC1M91QuLZfCzHjBMEKvIc-A


Check out VisiData.


Can't disagree with a rant if you can't read it.


I feel like that has to do with the Twitch/streamer culture too, where lots of people have big, visible mikes.


Maya and Max contribute less than 8% of Autodesk's total revenue[1].

[1] https://www.statista.com/statistics/416285/revenue-of-autode...


Don't click this link, it's a "create account" trap/paywall.


This is a really excellent comment.


check out display.land


I totally respect your opinion, but I think I fundamentally disagree with the idea that learning = pain. Project-driven learning seems more effective when the project is something that excites the learner. I might suggest that your ability to learn test-driven development in Go is actually driven by the fact that you're interested in Fibonacci calculators.


Interestingly there is a concept in learning research called "desirable difficulties". The idea is that learning something, forgetting it and then struggling to learn it again leads to faster learning times. It may be that struggle is inherent in the process of learning something complex. If you equate struggle with pain, then I think it makes some sense. I have found that the best learners I know are also a little bit obsessive. They can't let go of a problem. So for them, the struggle is inherent in the way that they operate. For others, I think, the struggle can seem daunting. I think it's reasonable to suggest to those people that when learning something complex you are likely to place yourself in uncomfortable situations. However, it's important for those people to realise that the discomfort is not harmful and, in fact, can be challenging and rewarding.

Often I think it is quite similar to the situation where some people can not conceive of doing endurance sports. They view it simply as pain, or at the very best boring repetition. Others thrive on it. But if you want to learn how to enjoy endurance sports, it's probably a good idea to acknowledge that there will be times where you will be uncomfortable (possibly intensely so).


> I think I fundamentally disagree with the idea that learning = pain

I'm not OP and my interpretation is probably not what he meant, but when I saw that line, I didn't read learning as pain so much as living through the pain that justifies the better practice.

> Project-driven learning seems more effective when the project is something that excites the learner.

I agree completely on that, but we can easily screw up that part by going too technical too quickly. It transforms a quick and fun project into something much bigger (learning takes a long time) with much less direct result. Fibonacci is a great example, I think, because it's so quick to achieve, there's so much potential to improve it in multiple ways, and it doesn't require much technical knowledge of the development environment.

When I tried learning React, I was going for a fun, quick project, but then I thought: well, if I'm going to do that, I'll go with a database to store my data, an authentication layer seems like an obvious requirement too, all deployed in Docker, with some server-side rendering, etc. Individually these could all be quite simple to add in crude ways, but the decisions stemmed from the fact that I wanted to reach perfection from the beginning, which made each of these individual features something much bigger and more complex to reach. If instead I do the same project with no database, just a big array that I modify, stored in a cookie, no authentication, no Docker, no server rendering, I can build it much quicker, and thus learn what I need to improve much quicker too. Afterward I can add a database if that's where I want to go, then authentication, maybe some server rendering, and maybe Docker to deploy it easily.


One way I try to combat that is by forcing myself to do things the simplest way (unless I am aware I'm making a technical decision I can't change later). Then I try to add more complex bits when necessary. As a bonus you learn the skill of "refactoring" etc.

For example, nobody is forcing me to use Docker from the start. I don't have to use it until I decide the development or deployment pain is big enough that I will learn how to use Docker. Similarly, why do server-side rendering when you can have a perfectly working project without it? Add server-side rendering later, when your project actually has real features. Not from day one. The temptation to use cool tech from the start is difficult to ignore at first, but after a few attempts to do a side project, only to give up before you've even started, it gets easier to appreciate stack simplicity :)

Trying to keep code as simple as possible from the start is more difficult, i.e. you might be structuring your code too naively and regret it later when you try to refactor. But as we know, abstracting too early creates similar if not worse problems.


Hey! I wanted to just tell you if you see this comment that your approach is more important than you may realize.

If you write internal tools for a company then it is SO powerful to write a front-end without a back-end.

Ultimately this gets into a method of Extreme Programming (viz. waiting on features until the absolute last minute) married to the philosophy of Domain-Driven Design.

In DDD you want to establish Bounded Contexts for language, then within that context you want every programmer to speak in terms that clients could understand. Being able to change your data model cheaply is deeply powerful here as you can just give someone an interface saying “it won’t save your changes yet, just try it and see how you would do your day-to-day work with it and tell me what doesn't make sense.” And then they will say (assuming an accounting app) “what do you do with purchases that don't belong to a contract, how do I input them?” and you say “what do you mean, I thought we were tracking purchases for contracts?” And they will be like “A pitch is different than a contract, but we still purchase things for pitches but we don't need all of these details for it.” And you're like “if I were to show you a list of both pitches and contracts, like what would that be a list of?” and they reply “that would be a list of projects!” and then you build a list of all projects into your app.

And then when you build the back-end, you have one database representing the bounded context, and it has a projects table (UUID id [PK], string name, UUID pitch_id [nullable], UUID contract_id [nullable]), a constraint to make sure that exactly one of those IDs is NOT NULL, and a purchases table that foreign keys to a project. The key point is not that this is a clever database structure, but that it was molded to the hands of the people who use it, by delaying the binding of that data structure into a concrete relational form as late as possible.
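
In case it helps to see that written down, here is a rough sketch of those two tables (SQLite via Python purely for illustration; the purchases columns beyond the foreign key are my own guesses, not part of the description above):

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("PRAGMA foreign_keys = ON")

    con.execute("""
        CREATE TABLE projects (
            id          TEXT PRIMARY KEY,   -- UUID
            name        TEXT NOT NULL,
            pitch_id    TEXT,               -- nullable UUID
            contract_id TEXT,               -- nullable UUID
            -- exactly one of pitch_id / contract_id must be set
            CHECK ((pitch_id IS NULL) + (contract_id IS NULL) = 1)
        )
    """)

    con.execute("""
        CREATE TABLE purchases (
            id          TEXT PRIMARY KEY,   -- UUID
            project_id  TEXT NOT NULL REFERENCES projects(id),
            description TEXT NOT NULL       -- hypothetical column
        )
    """)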


If you think that I am interested in Fibonacci calculators... I mean you are not wrong: I have a Gist I think on doing Fibonaccis by matrix exponentiation which is kind of fun. But in this particular case, no, I don't need to calculate Fibonaccis for any particular reason. It is just a nice project that I can do recursively and then refactor iteratively, and it requires getting some ideas about retrieving environment variables (is Fib[0] 1 or 0?) and command-line arguments.
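
For anyone curious, here is a rough Python sketch of the shape of that exercise (the original was presumably in Go; the FIB_BASE variable name and the argument handling are just my guesses at the env-var and command-line parts):

    import os
    import sys

    # Is Fib[0] 0 or 1? Let an environment variable decide.
    BASE = int(os.environ.get("FIB_BASE", "0"))

    def fib_recursive(n: int) -> int:
        if n < 2:
            return BASE if n == 0 else 1
        return fib_recursive(n - 1) + fib_recursive(n - 2)

    def fib_iterative(n: int) -> int:  # the "refactor iteratively" version
        a, b = BASE, 1
        for _ in range(n):
            a, b = b, a + b
        return a

    if __name__ == "__main__":
        n = int(sys.argv[1]) if len(sys.argv) > 1 else 10
        print(fib_recursive(n), fib_iterative(n))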


I think the NPM ecosystem contradicts your thesis. People find it too unpleasant to buckle down and type out that combination of string functions to left pad. It is less cognitive work to download some library. When I was younger the learning was easier, but even then, learning is etching new grooves in your brain. If it doesn't feel like you are changing your biology, you aren't getting real deep learning. Not that learning isn't sprinkled with moments of happy enlightenment. There comes a moment when, to truly understand something, you have to totally push against it. I am presently learning AWS, the AWS API, and Terraform. It is exhilarating and also very hard. The first month or two I collapsed asleep at the end of each day and dreamt of HCL and so on. Now, six months later, I can help my coworkers get around obscure gotchas in AWS and Terraform.


> People find it too unpleasant to buckle down and type out that combination of string functions to left pad. It is less cognitive work to download some library.

That's not the case though - why would I reinvent the wheel when I have finite time to get something done? I love to suss out solutions to all sorts of things on my own time, but if I have to left pad a field, I've already got lodash, I'm using that.

In addition, when someone goes back to read the code "_.padStart" just makes sense to read.

I'm not saying that understanding doesn't come from repetition and effort, but I think you're wrong about why people use libraries. It's not that people don't want to know; they've got to get whatever it is they are working on done.


Another reason to use a library is once you write that left pad function yourself, you have to go copy it around from project to project whenever you need it, then fix or improve it and copy the changes all over again. Second time you do that, might as well pull it into a library. And now you have a library that you lug around, might as well host it somewhere. In a few short decades you roll your own package management infrastructure and fill it with all kinds of useful things like string concatenation subroutines and macros for flow control primitives built on jmp instructions and stuff like that.

Might as well shortcut all that and see if there is something in standard library or in an existing package...


You might like Golan Levin's lectures on different forms of Experimental Capture. They are just collections of really good examples and previous work. He specifically has one about light painting and long exposure: https://github.com/golanlevin/ExperimentalCapture/blob/maste...


Some of these are mesmerizing, thank you for sharing.

