Hacker News | roytries's comments

I'm purely a hobbyist, but I was under the impression that D3D12 is so different from D3D11 that this would be a good time to consider both D3D12 and Vulkan. Is the choice of D3D12 standard because it 'sounds' familiar? Or is there more overlap with D3D11 than I was aware of?


D3D12 is very different from D3D11, but still has a lot in common. We started our port by reusing most of the same DXGI code and getting what we called our "sprites" rendering correctly (2D quads, used by the loading screen, and our debug UI). We were able to reuse a lot of code, including all of our existing HLSL shaders. None of that would work with Vulkan, at least without a lot more work (which we eventually did).

The parts that were D3D12-specific were definitely the parts that would overlap more with Vulkan: the work to switch to PSO (Pipeline State Objects), the addition of all the barriers all over the rendering code, the synchronization work, all of that work could be shared between D3D12 and Vulkan.
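To make the "shared work" concrete, here is a minimal, backend-neutral sketch of how transitions can be recorded once and translated per backend. All names are hypothetical; a real engine would map these to D3D12_RESOURCE_STATE_* and ResourceBarrier() on one backend, and to VkImageLayout/VkAccessFlags and vkCmdPipelineBarrier() on the other.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Backend-neutral resource states (hypothetical names).
enum class ResState { Undefined, RenderTarget, ShaderRead, CopySrc, CopyDst };

struct Transition {
    int resourceId;   // stand-in for a texture/buffer handle
    ResState before;
    ResState after;
};

// The rendering code records transitions once, API-agnostically; each
// backend then translates the pending list into its native barrier calls.
struct CommandRecorder {
    std::vector<Transition> pending;

    void transition(int id, ResState from, ResState to) {
        if (from == to) return;          // redundant barrier, skip it
        pending.push_back({id, from, to});
    }

    // Batch-submit, which both APIs prefer; returns how many barriers
    // the backend would have emitted.
    std::size_t flush() {
        std::size_t n = pending.size();
        pending.clear();
        return n;
    }
};
```

The point is that the barrier placement logic, which is most of the hard work, lives above this interface and is written once.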


You do need a very different design to efficiently target D3D12 and Vulkan. You need to think in terms of pipelines, root signatures/descriptor layouts, tables of descriptors, and resource transitions. But it's quite easy to also target D3D11 in this way:

Pipeline state -> struct of the 3 or 4 state objects + a few values (this makes redundancy checking easier and can actually be a win :)

Root signature -> fixed pre-allocation of binding slots (this can even be done statically with templates if your root signature is known statically)

Tables of descriptors -> arrays of handles

(edit: you obviously cannot do the kinds of bindless things with D3D11 that you can do with D3D12 and Vulkan. But it's common to have both bindless and non-bindless code paths with D3D12 and Vulkan anyway)

Resource transitions -> nothing
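A minimal sketch of the first mapping, with hypothetical types: the handles stand in for ID3D11BlendState and friends, and the comment marks where a real backend would issue the actual bind calls (OMSetBlendState etc.).

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical opaque handles standing in for ID3D11BlendState,
// ID3D11RasterizerState and ID3D11DepthStencilState pointers.
using BlendState  = std::uint32_t;
using RasterState = std::uint32_t;
using DepthState  = std::uint32_t;

// "Pipeline state" on a D3D11 backend: just a struct of the few
// immutable state objects plus a couple of values.
struct PipelineState {
    BlendState    blend;
    RasterState   raster;
    DepthState    depth;
    std::uint32_t sampleMask;

    bool operator==(const PipelineState& o) const {
        return blend == o.blend && raster == o.raster &&
               depth == o.depth && sampleMask == o.sampleMask;
    }
};

// The redundancy check mentioned above: one whole-pipeline comparison
// instead of N per-state checks scattered through the renderer.
struct Device {
    PipelineState current{};
    int bindCalls = 0;

    void setPipeline(const PipelineState& ps) {
        if (ps == current) return;  // unchanged: skip all the bind calls
        current = ps;
        ++bindCalls;                // a real backend would call OMSetBlendState etc.
    }
};
```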


I don't believe that Apple did not know about these initiatives before committing to building Metal. Apple was using the OpenGL graphics API (governed by the Khronos group) and AMD GPUs exclusively around this time period.

Note also that Apple is in the highest tiers of membership of the Khronos Group (see: https://www.khronos.org/members/list).

Vulkan was created from the Mantle API that AMD donated to the Khronos Group. Mantle existed two years prior to any mention of Metal. As a member of the Khronos Group and a very important partner of AMD, Apple would surely have had information on what these groups thought the future of graphics APIs would be.

I believe that Apple had other reasons for choosing to build Metal. But I'm afraid it will be hard to figure out what they were.


Your timeline is wrong.

Development of Mantle was announced at the end of September 2013 [1]. Metal was announced and released to developers in preview form in June 2014 [2]. Talk of integrating Mantle into OpenGL happened around August 2014 [3].

The actual timeline is: Apple spends months-to-years developing a proprietary graphics API (Metal) for their A-series chips. About nine months prior to Metal's release to developers, AMD announces they're going to _start_ developing their own proprietary graphics API. And two months after Apple announces Metal's existence and gives developers access to the software, AMD floats the idea of donating its API to Khronos.

In other words, Metal v1 was pretty much complete by the time AMD and Khronos even considered a partnership, and "Mantle going to Khronos" may even have been a reaction to the announcement of Apple transitioning to their own graphics API, rather than the reverse.

-- [1] https://www.forbes.com/sites/davealtavilla/2013/09/30/amd-an... [2] https://arstechnica.com/apple/2014/06/apple-gets-heavy-with-... [3] https://www.guru3d.com/news-story/amd-mantle-might-end-up-in...


I guess the debate then comes down to whether Apple, as an insider, had already committed to developing Metal before they knew AMD was up to something.

With the public announcement of Mantle in Q3 2013 and the first release of Metal in Q2 2014 (and the first release of Mantle in Q4 2014, afaik), I still think there is enough slack in the timelines to suggest that Apple had inside information. But I could be very wrong, of course.


> had already committed to development of Metal before they knew AMD was up to something.

They might have, but it's sort of a moot question. In 2012-2014, Apple was supporting four different graphics processors across their various product lines: Intel, Nvidia, AMD, and Apple A-series chips. Even if AMD had announced Mantle the day before Apple started working on Metal, it would've been a solution that only covered the AMD category, whereas the first version of Metal was capable of running on all of those architectures. [1]

The only alternative to Apple rolling their own would've been Apple working in conjunction with Khronos to develop a next-gen OpenGL. I can't find anything that indicates Apple approached Khronos, so it's hard to say whether Apple decided against it for technical reasons (Apple may have felt it would have taken too long to reach a final form, or that the design-by-committee approach would not have produced a satisfactory-to-Apple result) or political ones (perhaps there was insufficient appetite for that scope of change until Apple announced it was abandoning OpenGL for Metal).

--

[1] https://en.wikipedia.org/wiki/Metal_(API)#Supported_GPUs


I had originally assumed that Vulkan was an open-source response to Metal just from the name alone: Vulkan resembles Vulcan, the Roman god of metalworking. But I suppose these naming conventions would be common when you're writing code that's "close to the metal."


Maybe you're thinking of this incident? https://status.cloud.google.com/incidents/1xkAB1KmLrh5g3v9ZE.... It was a few days earlier and took almost 2 hours.


It would be a huge undertaking, especially if the DirectX types leaked into the rest of the application. DirectX 7 is from 1999; games like Half-Life 1 used it. That was when GPUs were mostly fixed-function, whereas nowadays a GPU is almost as versatile as a CPU.

The hardest/largest step would probably be getting it into this century, to the latest version of DirectX 9 (2005, Windows XP / Xbox 360 era). The step from 9 to 11 is also quite big, but a lot of the APIs have stayed compatible.


A viable solution for DX9 is to use DXVK to translate the old DX API to Vulkan, then add your own hooks into DXVK and transition to Vulkan in a more relaxed manner.

Now, DXVK does not support DX7, but a quick search turned up dgVoodoo2, which does emulate DX7 on top of DX11. Maybe that or a similar library can be used as a stepping stone.

Regarding porting legacy apps to 64-bit, most problems I've seen concerned old libraries (on Windows). That usually requires replacing old libraries with a new version and fixing includes. I've seen only a handful of bugs arising purely from 32-bit vs 64-bit differences.
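A hypothetical example of the kind of 32/64-bit bug meant here: old Win32 code often stashed pointers in 32-bit integers (DWORD/LONG), which silently truncates addresses on a 64-bit build. The struct names below are made up; the fix is usually this mechanical.

```cpp
#include <cassert>
#include <cstdint>

// The old, buggy layout: only room for the low 32 bits of a pointer.
struct LegacyHandleSlot { std::uint32_t raw; };

// The fix: a pointer-sized integer, correct on both 32- and 64-bit builds.
struct HandleSlot { std::uintptr_t raw; };

void store(HandleSlot& slot, void* p) {
    slot.raw = reinterpret_cast<std::uintptr_t>(p);
}

void* load(const HandleSlot& slot) {
    return reinterpret_cast<void*>(slot.raw);
}
```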


Unfortunately, the swapchain and rendering are vastly different from DX9 to DX11, the latter now being more similar to Vulkan.

So maybe it could work to port the codebase to DX9, but not really any further.


How necessary is porting it, and for how long will old DirectX versions continue to work?

Having no knowledge of the code, I imagine if you have a functioning graphics layer, most of the work would happen on the underlying physics models and high level drawing/scene APIs, not directly interfacing with DX.


Hm... I would have thought getting to DirectX 9 would be easier, as DirectX 9 still had some support for the fixed-function pipeline.

What I'm sure you were getting at is a proper conversion to DX9 that makes use of the shader-based pipeline.

That said, looking at the code, I don't think porting to proper shader-based DirectX would be terribly difficult for anybody experienced in setting up DirectX with a shader-based pipeline. Nothing looks too fancy.

Of course, it could be made more complicated by actually using shaders to improve on the fixed-function design, but that is not required for an initial port.


If it ain't broke, don't fix it.


I don't understand how NFTs, or blockchain in general, help in any way here, or in many other examples like this.

All an NFT is is an encrypted hyperlink in the blockchain. The hyperlink points to a resource on someone's server; people forget that the NFT doesn't store the real thing. In effect, an NFT is a DNS record with some ownership data attached to it.
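To make that concrete, here is a minimal sketch of what an ERC-721-style ledger actually records per token: an owner address and a metadata URI. The names and the URL below are purely illustrative; the asset itself (image, trophy, video clip) lives at the URI, off-chain, on someone's server.

```cpp
#include <cassert>
#include <cstdint>
#include <map>
#include <string>

// The on-chain state reduces to two mappings per token.
struct Ledger {
    std::map<std::uint64_t, std::string> owner;     // tokenId -> owner address
    std::map<std::uint64_t, std::string> tokenURI;  // tokenId -> metadata link

    void mint(std::uint64_t id, std::string to, std::string uri) {
        owner[id]    = std::move(to);
        tokenURI[id] = std::move(uri);
    }
};
```

Note that nothing here stores the asset: if the server behind the URI disappears, the ledger entry still "exists" but points at nothing.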

Right now you could showcase your Fortnite trophy in SteamVR IFF Valve and Epic decided to integrate this. Using regular boring techniques.

Blockchain doesn't make this easier in any way. All Valve could do is fetch which trophies you own from the blockchain, but they'd still have to integrate with that particular blockchain and with Epic's systems that host the actual thing.

Having a regular database/account system would be far more efficient and easier to use, tbh. For example, I can already showcase my tweets on LinkedIn thanks to regular boring integration: data from two different companies, shared by establishing that I own both accounts via a regular old token, transferred via an API.


> All an NFT is is an encrypted hyperlink in the blockchain.

So if someone moves URLs around on the server, or if the company goes bust, that link is dead.


Yep. I'm curious what will happen when the NBA decides it doesn't want to license its video clips to Top Shot anymore.


Most NFTs use IPFS for this reason.


Is this a quote that comes from somewhere? I see multiple people talking about this 'analogizing a list of funny names to genocide'.

I think it's been properly debunked multiple times in the comments here as untrue. Just wondering where it comes from, given that people keep repeating it so confidently.


Note that when you're poor, your surroundings are usually also poor. So even if you are young, not yet burdened by health-care costs, and maybe even have free/good education, you might need to drop out, or at least spend a lot of your time and money taking care of your relatives.

Only when everybody you care about has a stable health/housing/food/work situation do you get a chance to do something for yourself. Only if the people around you are privileged enough do they have time to invest in you. Then you can start to build wealth. I don't mean by investing money; maybe just by:

- lending you their garage
- allowing you to work without income for a few months
- taking care of your kids while you work, a few times a week
- etc.

TL;DR: if everything you and those around you do is aimed directly at surviving, you cannot build wealth.


The majority here don't understand the downward pull of poverty. Entrepreneurship could work with UBI or a similar solution where those at the bottom are allowed a stake. Currently they are not: they are exhausted, working 40-hour weeks at ten dollars an hour with no healthcare.


"Is anything with future value being build by European companies?"

Ehhhhh, ASML for example: the machines that are going to be in the chip factories mentioned here.


I don't know. How can you gauge the effectiveness of your model without good data? A model can be wrong, but it can also be changed/redone/rebuilt. Without data you can neither develop nor verify, imho.


It gives you a working MVP and a lower bound on what's possible. Then you can iterate on the model, the data, or both; data can also be further cleaned, changed, and expanded. How can you gauge the ROI of your data cleaning/collection efforts without it? Maybe an extra day you put into it made a difference; maybe it didn't, and you'd have been better off spending it on modelling or other tasks.


All you need(ed) was an e-mail address ending in @myuniversity.tld, and to use that when registering at the shop.

