AMD’s Reddit AMA – All Questions & Answers

In case you missed it, AMD held an “Ask Me Anything” yesterday over on Reddit. The AMA started at 10 AM Central time and concluded at 5 PM. All of the questions were fielded by Robert Hallock, technical lead at AMD’s Radeon Technologies Group. If you want to check out the AMA you can head over to Reddit, but we have listed all of the questions and answers below, with the questions highlighted.


Did moving to the 14nm FinFET node after being stuck on the 28nm planar node present any significant challenges that were different from previous shrinks?
Every node has its little foibles, but I am not quite in a position where I can disclose the level of information you’re probably looking for. I like keeping my job. 😉 However, we know that people are very interested in the process tech and we intend to publish a lot more information on the architecture and the process before Polaris parts arrive mid-2016. I know this isn’t quite as good as getting an answer today, but I hope you can see we’re thinking about questions like this already. 🙂

Can you discuss the yields on the Polaris dies right now?
I am not privy to yield information at AMD.

Did the recent earthquake in Taiwan affect the production schedule in any major way?
No.

How did Mantle affect the development of DX12 beyond catalyzing progress? Was there a significant amount of Mantle going into the development of DX12, or was it just the philosophy of low CPU/driver overhead?
I like to say that Mantle was influential. Microsoft was one of several parties that had access to the API specification, documentation and tools throughout Mantle’s 3-year development cycle. We’re glad Microsoft saw we were doing the right thing for PC graphics and decided to spin up the DX12 project; DX12 has been pretty great for performance and features.

Having each eye handled by a separate discrete GPU would be of greatest benefit in VR, so is there going to be an increased focus on CrossFire and LiquidVR support in preparation for big VR launches?
This is something we’re intensely interested in. LiquidVR SDK has a feature called Affinity Multi-GPU, which can allocate one GPU to each eye as you suggest. Certainly a single high-end GPU and/or dynamic fidelity can make for a good VR experience, but there are gamers who want unequivocally the best experience, and LVR AMGPU accomplishes that. As a recent good sign of adoption, the SteamVR Performance Test uses our affinity multi-GPU tech to support mGPU VR for Radeon, whereas SLI configs are not supported.

How are you guys going to cool the Fury X2? Is it going to be a blower like the dev kits Roy Taylor has been posting, or is it going to be a liquid cooling loop like the 295X2? Also, any idea of a price point?
With all due respect to Tizaki, he was erroneous in placing dual Fury on the list of kosher topics. I am not in a position where I can discuss that product.

In regards to Polaris, can you discuss any of the GPUs we could be seeing in the near future, like the Fury and Fury X successors? Are we going to see a low-profile card like the Nano again?
We will discuss specific SKUs and form factors when Polaris launches mid-year.

Is there anything you can disclose about GPUOpen tools being deployed in games that are in development, or even some of the interesting applications people are trying to make using the compute tools?
Watch for GDC. 🙂

As far as FreeSync monitors go, do you see increased adoption of FreeSync technology from monitor and panel manufacturers because of HDMI support?
HDMI support is directly responsible for expanding the list of FreeSync-enabled monitors from 30 to 40 overnight. HDMI is the world’s most common display interface, and there’s a huge industry economy of scale built around it, especially at the mainstream end of the market where users with modest GPUs are correspondingly most in need of adaptive refresh rates. Porting FreeSync to HDMI was highly requested by our display partners.

When will we see Vulkan support in the Linux driver, and will that driver only be available on top of AMDGPU as part of the hybrid driver model?

Also, will Polaris be supported on day 1 by the AMDGPU driver, or will it require changes to the current PowerPlay code that supports reclocking on the 380-series hardware?

I’m basically just trying to figure out if and when I need to buy a new card, and if Polaris is an option.

The Vulkan Linux driver will be released soon, and it will only be a part of amdgpu.

I do not know the current roadmap for our Linux drivers, and I am generally unable to disclose anything that forward-looking if I want to keep my job. Thanks for understanding. 🙂

What happened with TrueAudio? This was the most exciting feature of the Radeon GPUs, but there isn’t any new game that supports it.

Why don’t you build special support around TrueAudio, so the hardware can convert 5.1/7.1-channel audio to headphone stereo, much like how CMSS-3D works in the Creative drivers?

A lot of PCs are sold with an integrated GPU + dedicated GPU configuration. Why not use the integrated GPU for compute offloads? Windows 10 allows a lot of interesting techniques for sharing memory between the latency-optimized and throughput-optimized cores.

Will the Kaveri and Godavari APUs get an updated BIOS to support HSA, even if the chips weren’t designed for the 1.0 spec?

I like the new APIs (DX12 and Vulkan), but there are some features that might come in handy, and the consoles support these. For example: ordered atomics, SV_Barycentric, SIMD lane swizzles. Is there a chance to support these with some extensions on PC?

1) TrueAudio still finds regular use in the console space. However, on the desktop PC side there seems to be generally less interest in complicated soundscapes for PC gaming. Going forward, we’re interested in exploring TrueAudio’s rich positional capabilities to augment VR experiences–I think this is probably a better use for the technology.
2) For the precise reason you pointed out: most audio hardware already supports this functionality, and a duplication of effort is not a worthy use of resources.
3) DirectX 12, Vulkan, Mantle and HSA can do this.
4) KV/GV already support HSA; however, the hardware is compliant with the HSA 1.0 Provisional specification. The 1.0 Final specification increased hardware requirements that are met by Carrizo. More info.
5) DX12 and Vulkan both support API extensions. Especially Vulkan. 🙂

Can we put DX11 to rest? Are we going to see DCLs added or not?

Can anything be said in more detail about how RTG views its current drivers and what it’s striving for?

How soon can Linux users expect to see some love with some major gains in the Linux driver?

Any chance of integrating features or advanced settings that are featured in programs like RadeonPro and RadeonMod, and ditching Raptr outright?

Are we going to see any rebrands this year or will everything be a fresh baseline with 14nm on the GPU side?

With the benchmarks we’re seeing that take advantage of async compute, will this feature require additional work from the devs, or is it something easily implemented to the point where it will be mainstream?

Any tricks up AMD’s sleeve that we might see in Vulkan that may or may not be an easy addition to DX12?

Is there a specific reason the drivers still have downclocking issues when a small program can be used to stop them?

Are there any plans to increase resources (including software engineers) allocated to Radeon Software including Drivers, GPUOpen materials and other products?
AMD DockPort: anything we can expect to see from this?

1) Because DCLs are useless. They’ve been inappropriately positioned as a panacea for DX11’s modest multi-threading capabilities, but most journalists and users exploring the topic are not familiar with why DCLs are so broken or limited.

Let’s say you have a bunch of command lists on each CPU core in DX11. You have no idea when each of these command lists will be submitted to the GPU (residency not yet known). But you need to patch each of these lists with GPU addresses before submitting them to the graphics card. So the one single CPU core in DX11 that’s performing all of your immediate work with the GPU must stop what it’s doing and spend time crawling through the DCLs on the other cores. It’s a huge hit to performance after more than a few minutes of runtime, though DCLs are very lovely at arbitrarily boosting benchmark scores on tests that run for ~30 seconds.

The best way to do DX11 is covered in our GCN Performance tip #31: “A dedicated thread solely responsible for making D3D calls is usually the best way to drive the API.” Notes: the best way to drive a high number of draw calls in DirectX 11 is to dedicate a thread to graphics API calls. This thread’s sole responsibility should be to make DirectX calls; any other types of work should be moved onto other threads (including processing memory buffer contents). This graphics “producer thread” approach allows the driver’s “consumer thread” to be fed as fast as possible, enabling a high number of API calls to be processed.
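
To illustrate the “producer thread” pattern quoted above, here is a minimal, hypothetical C++ sketch (ours, not AMD’s): worker threads prepare data and enqueue draw submissions, while one dedicated graphics thread is the only place the D3D11 device context would ever be touched.

    // Minimal sketch of a dedicated graphics "producer/consumer" thread.
    // The D3D11 calls are represented by a placeholder comment so the
    // example stays self-contained and compilable.
    #include <condition_variable>
    #include <functional>
    #include <mutex>
    #include <queue>
    #include <thread>

    class RenderCommandQueue {
    public:
        void Push(std::function<void()> cmd) {
            {
                std::lock_guard<std::mutex> lock(mutex_);
                commands_.push(std::move(cmd));
            }
            cv_.notify_one();
        }
        // Runs on the dedicated graphics thread: drain commands in order.
        void ConsumeLoop() {
            for (;;) {
                std::function<void()> cmd;
                {
                    std::unique_lock<std::mutex> lock(mutex_);
                    cv_.wait(lock, [this] { return !commands_.empty() || done_; });
                    if (done_ && commands_.empty()) return;
                    cmd = std::move(commands_.front());
                    commands_.pop();
                }
                cmd(); // The only place ID3D11DeviceContext calls would happen.
            }
        }
        void Shutdown() {
            {
                std::lock_guard<std::mutex> lock(mutex_);
                done_ = true;
            }
            cv_.notify_one();
        }
    private:
        std::queue<std::function<void()>> commands_;
        std::mutex mutex_;
        std::condition_variable cv_;
        bool done_ = false;
    };

    int main() {
        RenderCommandQueue queue;
        std::thread graphicsThread([&] { queue.ConsumeLoop(); });

        // Game threads fill buffers elsewhere and enqueue only the final
        // API submission for the graphics thread to execute.
        queue.Push([] { /* context->DrawIndexedInstanced(...) here */ });

        queue.Shutdown();
        graphicsThread.join();
    }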

2+4) Sort of. We’re clearly interested in doing a major feature-rich driver every year, and that is on-track for 2016 as well. We constantly crawl the various PC gaming communities to keep our fingers on the pulse of what people want to see in terms of features. That’s why custom resolutions were added in Radeon Software, for example. But for us it is always a delicate balance of deciding what to leave in the 3rd-party tools vs. what should be incorporated directly. Not everything is safe or easy for the everyday user that’s not likely to be a GPU enthusiast participating in my nerd AMA. 😉

3) It would probably be better to ask Linux driver questions of our guru Graham Sellers. He’s a much better authority than I am on Linux and the Linux driver. I’ll be transparent: I’m a Windows gamer, and always have been, and probably always will be. I’m not very familiar with Linux.

5) We will discuss SKUs when Polaris debuts mid-year.

6) Async Compute does not require any changes to a developer’s shader code, so it is relatively straightforward to implement. Thus far every DX12 app and test has included support for it, so that’s encouraging.
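
For context, the host-side piece of async compute in DX12 is simply a second command queue of type COMPUTE created alongside the normal graphics queue, with fences used to synchronize the two; the shaders themselves are untouched. A rough sketch of ours, assuming a Windows/D3D12 build environment (this is not AMD sample code):

    // Sketch: create a graphics (DIRECT) queue and a separate COMPUTE queue so
    // compute work can overlap with graphics work. Synchronization between the
    // queues is handled with ID3D12Fence Signal()/Wait() calls (not shown).
    #include <d3d12.h>
    #include <wrl/client.h>
    using Microsoft::WRL::ComPtr;

    bool CreateQueues(ID3D12Device* device,
                      ComPtr<ID3D12CommandQueue>& graphicsQueue,
                      ComPtr<ID3D12CommandQueue>& computeQueue) {
        D3D12_COMMAND_QUEUE_DESC desc = {};
        desc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;   // graphics + compute + copy
        if (FAILED(device->CreateCommandQueue(&desc, IID_PPV_ARGS(&graphicsQueue))))
            return false;

        desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;  // compute-only, may overlap
        if (FAILED(device->CreateCommandQueue(&desc, IID_PPV_ARGS(&computeQueue))))
            return false;

        return true;
    }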

7) Comparable capabilities, though Vulkan more enthusiastically supports API extensions that could be useful to future hardware. NOTE TO JOURNALISTS: This does not imply there are super secret hidden hardware features that can only be exposed by Vulkan. I am only pointing out that this is one of the powerful aspects of Vulkan for HW vendors like us.

8) I believe our release notes make it very clear we’re aware of the issue and are working to resolve it. The fix will be ready when it’s ready.

9) I cannot comment on AMD staffing.

10) Dockport is from our client (CPU/APU) side of the business, so I’m not familiar with its goings on.

11) I hope I numbered these correctly.

Will Polaris GPUs use GDDR5X (if they’re not already using HBM2)?

How different is the Polaris architecture compared to the current generation GCN? Is it GCN 1.3 or GCN 2.0?

How is tessellation implemented in Polaris?

What is the tier of support for DX12 features like conservative rasterization, tiled resources, resource binding, rasterizer-ordered views, order-independent transparency, etc.?

What happened to TrueAudio? Now is the perfect time for 3D audio technologies given the commercial introduction of VR.

Can you guys work with Khronos and release a full featured OpenSL? Maybe even donate TrueAudio to Khronos to kickstart the process? Immersive VR requires not only low-latency visuals, but also low-latency 3D sound.

Why are improvements in cooling technology, like the Sandia spinning heatsink or TIMs with good lateral conductivity, not seen in commercial products yet?

Why the name GPUOpen? As much as it is a step in the right direction, the name is a bit…. unimaginative.

How independent is the RTG to make decisions?

As a general comment, you’re asking many architecture-specific questions. I cannot answer them right now, but please know that we understand there’s a tremendous appetite for specific µarch and process details, and we intend to answer them before Polaris launches mid-year. I think this generally addresses questions: 1, 2, 3, 4. But on the point of #2: We consider this 4th gen GCN.

For TrueAudio, see this. WRT OpenSL: decent point, I’ll bring it up internally.

For TIM: I am not a materials science expert, so I probably cannot answer your question to the depth that you want. However, I will say that advanced TIMs can get very expensive very fast, and that is probably the hurdle you’re seeing in commercial products.

GPUOpen: It’s a straightforward name. I like it.

Independence: There’s no objective metric by which I can measure an answer to this question. These discussions happen above my level, and the arrival of RTG has not substantially changed my day-to-day job function.

OH YES! Me first! Ok, so when is gonna be the EXACT release date on those Bristol Ridge APUs and Polaris GPUs?

June 2, 1997.

Are the cards going to have any overclocking potential, or is this going to be another Fury situation?

Will the cards come with DP1.4?

We will discuss specific SKUs and overclocking capabilities at product launch in mid-year.

They will come with DP1.3. There is a 12-18 month lag time between the final ratification of a display spec and the design/manufacture/testing of silicon compliant with that spec. This is true at all levels of the display industry. For example: DP 1.3 was finished in September, 2014.

Hi, how did you all get your jobs at AMD? Where did you all go to university, what did you study, etc? What kind of person would you recommend your job for? What’s it like? What’s the best part of the job, and what’s the worst?

And how would you recommend someone start teaching themselves about the more advanced parts of today’s processors? Block diagrams blow my mind and I’ve always wanted to learn more about how these things work.

1) I used to be a hardware reviewer at a .com. You make contacts in the industry, and you make friends. One day you post on Facebook that you’re looking for a new apartment, and one of those contacts asks if you want to move to Toronto to work for AMD. Of course I said yes! Best decision of my life.

2) I did not complete university. Instead I spent about 6-8 hours of every day for 3-5 years learning about PC hardware, reviewing that hardware, analyzing the industry, and writing furiously about that industry. I did over 500 feature-length articles during that time.

3) I think it’s like any job. Sometimes it’s incredibly stressful, especially surrounding a product launch or a tradeshow, and other times it’s nice and laid back. I have really fantastic coworkers that keep me level, and it’s nice to be surrounded by cool technology all the time. The access to tech is probably the coolest part, but the frequent travel is probably the worst part–it’s just exhausting.

4) Read every architecture deep dive you can get your hands on! For both CPUs and GPUs. That will help you start to understand what each component of the silicon does, and how it affects performance.

Will this finally be the year of AMD on Linux?

Our Linux driver will be released quite soon.

Hello RTG. The thing that I would like to know is whether the Linux AMDGPU driver will support GCN 2 (Hawaii) hardware. Thank you very much!

amdgpu already supports Hawaii.

Thanks for taking the time to answer our queries, Thracks! Here are mine:
1) Following Anandtech’s look at AMD laptops, I’d like to know if there’s a realistic possibility of having high-end AMD graphics make a return in gaming notebooks. Having NVIDIA’s hardware as the only option is not healthy for the industry, and I’d hoped that you would have some way of shoehorning a Radeon R9 Nano into a high-end 17-inch laptop chassis by now.
2) Why are the GPUs inside the Falcon Tiki cases that Roy Taylor teased on Twitter using blower-style coolers? Is it a hybrid water-cooling kit, or will the dual-GPU Fury use this design for compact chassis? https://twitter.com/Roy_techhwood/status/704022817478569984
3) Can you comment on the recent developments regarding Ashes of the Singularity and DirectX 12 in PC Perspective and Extremetech’s tests? Will changes in AMD’s driver to include FlipEx support fix the framerate issues and allow high-refresh monitor owners to enjoy their hardware fully?
4) Can you comment on how FreeSync is affected by the way games sold through the Windows Store run in borderless windowed mode?
5) How far along are the efforts to port over the changes in Radeon Crimson Software to Linux? When can I reasonably expect to be able to switch over, and not have issues in games with my Radeon R7 265, using AMD’s drivers?
6) Daisy-chained Displayport 1.2a monitors with FreeSync… does it work?
7) FreeSync on a Displayport monitor connected to a Thunderbolt dock… does that work? (see here [https://www.youtube.com/watch?v=NshXgisNly4] at 14:24)
8) This post of yours, Thracks, on Facebook: is this hinting at the work AMD’s been doing to drive the DockPort standard? I haven’t heard about that for at least two years, and I fully expected to be able to buy DockPort stuff by now.

1) Polaris. 🙂
2) This is a project that I have not personally worked on or been involved with at AMD, so I could not answer.
3) We will add DirectFlip support shortly.
4) This article discusses the issue thoroughly. Quote: “games sold through Steam, Origin and anywhere else will have the ability to behave with DX12 as they do today with DX11.”
5) This is a question better posed to Graham Sellers, our Linux driver Guru.
6) It should. I have not tried.
7) That depends on how well the dock transports the DP information, and if it is fully compliant with the DP1.2a standard. Should be fine, though.
8) No.

1) TrueAudio still finds regular use in the console space.
Could you please expand a bit on this? More precisely:
1 – Does the PS4 have the exact same audio DSP block as TrueAudio in discrete GPUs?
2 – When is it used? Does the console use it automatically every time a compatible middleware for audio is used in development? E.g. every time we see “Wwise” and/or FMOD in a PS4 game can we assume TrueAudio is being used?
3 – If so, why isn’t this simply ported to PC games in multi-platform titles?

1) No.
2) Same dependencies as PC: when the developer chooses the TrueAudio-enabled versions of any Wwise/FMOD plugin.
3) Lots of things don’t carry over from PC to console, and vice versa. I am not privy to the details of why.

How far off are HDR monitors? AMD hyped them up at CES, and I think they would be great for games like Elite Dangerous.

We expect the industry to arrive on HDR displays in the second half of 2016.

With the lack of overlay and Fraps support in many DirectX 12 games, is there a way that gamers and reviewers can properly benchmark games in the future? Will Hitman have FCAT support? Will there be a way to benchmark DirectX 12 games, especially ones that use the Windows compositing engine (i.e. are from the Windows Store)?

It will be up to developers to make performance analysis applications that are compliant with DX12, just like they did for DX9/10/11. App developers can also incorporate high-performance counters and logging suitable for analyzing performance.
To your second question: “games sold through Steam, Origin and anywhere else will have the ability to behave with DX12 as they do today with DX11.” Source.
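
As an illustration of the in-application logging approach described above (our sketch, not AMD’s tooling), a game or benchmark can time its own frames with a high-resolution clock and report averages and percentiles, independent of any external overlay:

    // Sketch: per-frame timing recorded inside the application itself, which
    // keeps working even when external overlays cannot hook a DX12/UWP title.
    #include <algorithm>
    #include <chrono>
    #include <cstdio>
    #include <vector>

    int main() {
        using clock = std::chrono::steady_clock;
        std::vector<double> frameMs;
        auto last = clock::now();

        for (int frame = 0; frame < 1000; ++frame) {
            // The real Render()/Present() work for one frame would go here.
            auto now = clock::now();
            frameMs.push_back(std::chrono::duration<double, std::milli>(now - last).count());
            last = now;
        }

        std::sort(frameMs.begin(), frameMs.end());
        double avg = 0.0;
        for (double ms : frameMs) avg += ms;
        avg /= frameMs.size();
        // Average plus a high percentile says more about smoothness than average FPS.
        std::printf("avg %.3f ms, 99th percentile %.3f ms\n",
                    avg, frameMs[static_cast<size_t>(frameMs.size() * 0.99)]);
    }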

Tell us more about HDR. Does this tech add cost to a monitor, and how much? When should we expect the first monitors with this tech? How many games will support it? How is this going to work with different panel types (TN/IPS)?

1) Yes it does.
2) Probably 2H16.
3) Unknown at this time. Developers will need to adjust their tonemapping algorithms to be HDR-aware (see the sketch after this list). This is not a trivial task, but also not incredibly complicated. Some notable devs have already expressed interest to us, so I would expect games to follow around the time monitors start appearing.
4) HDR is best articulated by bright and high-contrast panel technologies like OLED. But local-dimming LCD is also viable on a variety of panel types, especially the contrasty ones like VA.
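
To give a rough sense of what “HDR-aware tonemapping” involves, here is a toy example of ours (not AMD code, and not a production-grade curve): an extended Reinhard-style operator where the display’s headroom is a parameter, so the same scene luminance is mapped differently for an SDR panel than for a brighter HDR panel. The white-point values are hypothetical.

    // Toy extended-Reinhard tonemap: `whitePoint` is the scene luminance that
    // should map to the display's maximum output.
    #include <cstdio>

    float Tonemap(float luminance, float whitePoint) {
        return luminance * (1.0f + luminance / (whitePoint * whitePoint))
               / (1.0f + luminance);
    }

    int main() {
        const float sdrWhite = 2.0f;   // hypothetical headroom for an SDR monitor
        const float hdrWhite = 10.0f;  // hypothetical headroom for an HDR monitor
        for (float l : {0.5f, 2.0f, 8.0f}) {
            std::printf("scene L=%4.1f  SDR=%.3f  HDR=%.3f\n",
                        l, Tonemap(l, sdrWhite), Tonemap(l, hdrWhite));
        }
    }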

As a followup to #3, I know that HSA has been pushed for a while now and that Mantle, DX12 and Vulkan support using non-paired GPUs, leading me to think that the future should be the heyday of CPUs with an integrated graphics chip. (They also make sense from a usability standpoint; if my Radeon card dies I still would like to be able to use the computer, even if it isn’t to game.)

Are there plans to integrate a GPU into Zen, and if so, how long will we have to wait for them to come out? Will the Zen CPUs always be the flagship chip design, or will the future see Zen APUs taking over?

Zen is the name of an architecture. There will be APUs and CPUs based on Zen. Zen-based CPUs will debut first.

Posting as its own comment for visibility: I know there are lots of questions about Polaris’ architecture, memory configurations and market SKUs. Right now it is just too early for me to be disclosing this sort of information. I know that will be disappointing to many of you, and I apologize that the OP was not more clear about the boundaries.

I promise to return for another AMA when I can answer all of your Polaris questions. I intend to do right by you guys; you deserve that. You’ll just need to give me a little more time to do that. 🙂

Not even if it’s all 14nm or some 16nm? Come onnnn, that doesn’t seem worth hiding.

14nm GlobalFoundries.

Does AMD support using DDU to completely wipe drivers?
Answer as a non-employee: DDU seems to work just fine, and I use it for every driver install because I’m picky about that sort of thing.

Answer as an employee: We have our own tool.

Previously, your next generation GCN GPU architectures were using arctic island code names ex: Ellesmere, Baffin, Greenland… As I understand it, the code names were changed to Polaris 10, 11 and Vega 10, 11. My question is, how do they relate to one another? Is Baffin Polaris 11 for example?
I don’t know how to answer this. Polaris 10 is Polaris 10, and Polaris 11 is Polaris 11. There are no other names.

Ok, safe to say I have more than a few questions – I understand a few of these may be very much “neither confirm nor deny”. Sorry in advance for the lack of structure, I’m just writing stuff as I think of it…
Can you say if the previously shown Fury X2 boards are final? If so, are they?
What direction is dual graphics going in? Between DDR4 and improvements to GCN the memory bottleneck is certainly being eased – can we expect an XDMA engine in future APUs?
Is AMD actively working with any game developers on taking advantage of HSA?
Bit similar to the above, but will GPUOpen include functions that leverage the capabilities of processor graphics cores, either through HSA or OpenCL? If so, will they support Intel’s “APUs” at all, or will you be accepting patches from Intel that would add such support? Is this made more awkward to address by the fact that more expensive “enthusiast-grade” CPUs are less likely to include graphics cores?
Any sign of Intel supporting FreeSync in the future? Would you even know if they were going to?
Is it true that Kaveri has a GDDR5 controller, but no products using it were released because of poor prospects for market positioning or the total lack of hardware ecosystem support?
Polaris is going to be in the next macbook, isn’t it?
ISN’T IT?
More serious question in that I actually think you might be able to answer unlike the above 2 or 3, how easy is it for monitor vendors to initially adopt freesync, and to add more freesync products once they’ve done their first? Do you see there being a point in the future where Freesync is a universal or near-universal feature of new monitors?
Where do you think VR will be in 5 years time, and what do you think the impact on ‘conventional’ gaming will be?
There’s a lot of buzz about “explicit multiadapter” in DirectX 12. Do you think DX12 and Vulkan are going to lead to more games supporting some sort of multiGPU technology (outside of one-chip-per-eye VR setups)?
Is there a trend towards multigpu implementations that focus more on user experience than pure fps as in Civ:BE, or was that a bit of a one-off?
What are you personally most looking forward to – VR, low-level APIs becoming commonplace, or the inevitable big jump in top-end “conventional” performance that will come sooner or later with the jump to 16/14nm?

1) I don’t know.
2) I don’t think there’s enough information being passed to a dual graphics configuration to benefit from XDMA.
3) HSA is not for gaming. It is purely GPGPU.
4) Half of the GPUOpen effort is for GPU compute. We are open to code submissions.
5) Intel has previously commented that future CPUs would support DisplayPort Adaptive-Sync. FreeSync is based on DPAS as well.
6) I’ve never heard this.
7) I only work on the PC side of our business. I do not know.
8) DON’T HURT ME.
9) It’s actually pretty easy. Many scalers in the market at the time FreeSync was introduced were technically capable of adaptive refresh rates, but there was no specification or firmware to expose that aspect of the hardware or to use the hardware in that new way. The DisplayPort Adaptive-Sync specification changed that, and gave scaler vendors a blueprint suitable for developing newer compliant firmware that could be accommodated by many of their existing SKUs. Of course other SKUs may not be compliant, and replacements were developed in time.

A monitor vendor is already on the hook for buying a scaler to make a display, so now they can simply choose one that offers DPAS support. We work closely with our partners to also make sure that the bill of materials they’re assembling is likely to meet our criteria for FreeSync validation and logo, which looks for specific qualities that are not addressed anywhere in the DPAS specification. Not all monitors make it.

The larger hurdle is tuning the monitor firmware for the particular LCD panel chosen. That’s an extra step of validation that is less intensive on a static refresh screen, but it’s not a disproportionate burden or anything. And once MFGRs get a sense of this overall process, yes, it does become easier to make more displays.

10) Gosh, I can’t even fathom. IT IS PURELY MY PERSONAL BELIEF/NOT REPRESENTING THE OPINIONS OF AMD that virtual reality will follow the model of all other consumer goods: higher refresh rates, higher resolutions, lower costs, more content. At AMD we want to see 16K pixels per eye at 240 FPS as an ultimate goal. We believe that this will be capable of simulating reality.

11) Currently unclear, but I hope so. EMA is awesome.

12) DX12/Vulkan make unconventional and superior mGPU configurations, like split-frame rendering, easier to implement. DX11 doesn’t even support SFR. We (the industry) are just now building broad infrastructure to explore the possibility, so it’s hard to calculate the trajectory.
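
For reference, explicit multiadapter in DX12 starts with the application enumerating every GPU itself and creating one device per adapter (for example, one per eye, or one per half of the frame for SFR). A minimal sketch of ours, assuming a Windows/DXGI environment:

    // Sketch: enumerate hardware adapters so a DX12 renderer can create a
    // device per GPU for explicit multiadapter rendering.
    #include <dxgi.h>
    #include <wrl/client.h>
    #include <cstdio>
    #include <vector>
    using Microsoft::WRL::ComPtr;

    int main() {
        ComPtr<IDXGIFactory1> factory;
        if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory)))) return 1;

        std::vector<ComPtr<IDXGIAdapter1>> gpus;
        ComPtr<IDXGIAdapter1> adapter;
        for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i) {
            DXGI_ADAPTER_DESC1 desc;
            adapter->GetDesc1(&desc);
            if (!(desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE))
                gpus.push_back(adapter);  // one D3D12 device would be created per entry
        }
        std::printf("GPUs usable for explicit multiadapter: %zu\n", gpus.size());
    }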

13) I am personally most excited by the expansion of adaptive refresh rate technologies. I can never go back to any gaming experience that doesn’t feel like that magical “60 FPS” moment all the time. And that’s what FreeSync feels like: 100% smooth, 100% of the time. It feels like the GPU is so much faster than it is, even when the game is running at like 45 FPS.

Does Polaris have a dedicated H.264/H.265 encoder/decoder chip, or is it the GPU that’s able to do that? I want a red team Shadowplay, otherwise I’ll have to upgrade to NVIDIA :/
Yes, it has H.265 encode and decode up to 4K.

Is GCN 1.0 support for the AMDGPU linux driver planned?
No.

Our Linux driver will be released quite soon.
Different companies have different definitions of Soon™.
Hours / days / weeks / months / years?

If I could be more specific, I would. We’re not talking months.

I’m slightly surprised by the answer about HSA – of course it does nothing for drawing pretty pictures, but surely it’d be really beneficial for things like physics simulations? Am I missing something?
HSA isn’t suitable for gaming physics sims because gaming graphics APIs already have their own well-tailored solutions for GPU physics. It’s a needless reinvention of the wheel to apply HSA to the GPU physics topic.

1. Currently CrossFire does not work in borderless windowed mode for DX games, yet it does work properly in OpenGL, Mantle, and I’d assume for Vulkan too. Since DX12 is in the spirit of Mantle, will we possibly see windowed CrossFire support in those titles?
2. I’d assume DisplayPort 1.4 is off the table for Polaris, but is it possible we could see it with the next generation?
3. With DX12 we’re seeing a dramatic decrease in processor load, but is there any more room for optimizations in DX11 on the driver side?
4. Will we ever be able to use the outputs on the slave card in CrossFire mode?

1) Yes, this is possible now.
2) Way too early to say.
3) We continue to look at it, yes.
4) The memory copies required to drive SLS gaming on mGPU would annihilate performance.

Hello AMD! First I want to say I appreciate you guys so much for what you have done for the games industry. Without you guys we would all be in a world of hurt, honestly, and your products have yet to disappoint on my personal end!

I have 2 questions! Are my 7970s “fully” capable of next-gen APIs, or are there going to be new hardware innovations with proprietary features that will lock me out? I could never find a clear answer for this!

Also, can you talk a little bit about working with Mark Cerny on the PS4? It’s one of the fastest selling consoles of all time and Uncharted 4 looks absolutely amazing; knowing that it’s designed on the hardware you supplied just blows my mind!

Thanks AMD you guys are seriously amazing!

I want to be clear that there is no graphics architecture on the market today that is 100% compliant with everything DX12 or Vulkan have to offer. For example: we support Async Compute, NVIDIA does not. NVIDIA supports conservative raster, we do not. The most important thing you can do as a gamer is to own a piece of hardware that is compatible with the vast majority of the core specification, which you do. That’s where all the performance and image quality comes from, and you will be able to benefit.

As for your PS4 questions: I work on the desktop PC side of our business, so I couldn’t really say anything useful about the PS4. 🙂

Is the 490 going to be on 14nm as well? My 280X is still kicking strong, but I want to upgrade both monitor and GPU by the end of next summer. Speaking of which, what will happen to the 300 series? Is it going to be phased out like last year’s 200 series?
Polaris is 14nm. Plain and simple. Any time any company flips over to a new lineup, the old products that got replaced are a “while supplies last” kind of deal.

Hi, my questions are all about the current state of drivers and possible future updates to them:
1) Are there any plans to incorporate some of RadeonPro’s features in future Crimson drivers, like texture LOD controls, dynamic vsync, mipmap quality, etc.?
2) Will there be more antialiasing modes or options incorporated in future drivers?
3) Could Radeon Settings have custom fan profiles and core voltage controls implemented in future drivers?
4) How satisfied are you with the current state of the Crimson drivers?
5) Will there be more tessellation improvements on the driver side for current 300-series cards?
6) Could SSAO or HBAO methods be implemented in future drivers for older games that do not possess any of these occlusion methods?
7) And finally, will there be any frame latency improvements?

Hope it isn’t much to ask and thanks 🙂

1) We constantly monitor the community’s feature requests and evaluate whether or not the feature is worthy of bringing into the driver, or leaving in a 3rd-party tool (which uses AMD APIs and interfaces to expose these features). We take this feedback seriously, which is why, for example, we added CRU support directly in the driver in Crimson. Nothing is off the table.
2) Forced AA modes often do not work in the driver because most games use deferred rendering engines that are basically incompatible with pipeline AA options.
3) If there’s a “big enough” pool of requests for it, yes, it is possible. NOTE TO JOURNALISTS: I am explaining how we weight feature additions. Please do not misconstrue my explanation as a confirmation.
4) I think it’s a big improvement over CCC. Huge. I love the UI. I am excited to see what gets added in the big feature driver for 2016. 🙂
5) 8-16x tessellation factor is a practical value for detail vs. speed, and this is what our hardware and software is designed around. Higher tessellation factors produce triangles smaller than a pixel, and you’re turfing performance for no appreciable gain in visual fidelity.
6) Shader injection is a risky business when you deploy a piece of software to tens of millions of people. Shader injection can easily lead to rendering errors, game crashes or BSODs. Few people think about the sheer scale of our userbase, or how a feature that might work for a small number of people tinkering with SweetFX might break down when exposed to millions of people.
7) We profile games and, where able, adjust the pre-rendered frame limits to reduce frame and input latencies to their lowest possible values.

Any plans for a new recording software like Shadowplay inside the Crimson software?
The AMD Gaming Evolved client does this. It uses our VCE blocks inside the GPU for hardware-accelerated recording and streaming.

What if someone is majoring in both Electrical & Computer Engineering and Media Communications? I’m just interested in seeing if there’s any position that would converge these two together.
Probably a job just like mine. Don’t take my job, pls.

I’ve got the XF270HU with a FS range of 40-144Hz, why wouldn’t that have LFC?
XF270HU is the IPS variant of the XG270HU. LFC is supported.

Do folks at AMD expect any significant lead over NVIDIA, performance-wise, as the new graphics APIs develop and take root?
We’ve led in performance on every DX12 app/test so far.

Is there anything new at all you can tell us about Polaris or is this AMA about your Guinea pigs and imagining you in your boxers playing Rocket League?
Hey, I said they were gym shorts dude.

AMD Gaming Evolved suggested installing Radeon Software Crimson Edition on Windows 10. Now I have two AMD applications installed -> confusing. Is there a plan to remove one?
AMD Gaming Evolved is our optional game streaming, game recording, game optimization client. You can remove it any time you like. Radeon Software Crimson Edition is our graphics driver; I imagine you wouldn’t want to remove that.

I’m sorry, but a lot of people don’t like that thing, and a lot of people are also having technical problems with it. I think it was a mistake buying it. You also killed RadeonPro with this move, which was the only tool with enough settings, and Radeon Settings still isn’t a good replacement for it – Radeon Settings isn’t even able to read the correct Windows language setting variable …
25 million people actively use the Gaming Evolved application every day. Not just installs, but active users. I think a little perspective is warranted.

As for RadeonPro, let me be clear: John Mautari decided to leave the project for a job at Raptr. That’s his prerogative, and it’s his life.

What would you say would be the lowest overall boost we’ll see in FPS performance moving to Vulkan? I mean, presumably Vulkan performance will be better than current performance; what’s a very conservative, rough estimate for the kind of performance boost to expect? 5%? 10%?

There’s been a lot of talk about AMD drivers hampering performance. Would you say that drivers aren’t quite doing AMD hardware justice? Presumably mature Vulkan drivers for AMD hardware won’t be hampered in the same way, if the driver issue is a real problem. What kind of increase in performance would you estimate we could see?

Will Vulkan games and applications be more sensitive to CPU core count? Will 4-core and 8-core systems see better scaling with Vulkan than with current APIs? What rough numbers can you throw out there?

What are you most excited about with the AMD chips that are on the shelves now? Process shrinks haven’t been getting easier, but it seems like you’ve been able to rise to the challenge. The media seems to be focusing on your HBM advantage. Sadly I haven’t been able to follow things too closely. A quick Google search points to some hits on NVIDIA being tied to Samsung’s 14nm process. Is there any good news you’d like to share, or any early predictions you can make? Presumably the Samsung process will be targeted at fairly low power and low voltages. Will that be a disadvantage for them going up against chips like the Furies and the R9 380s?

I don’t think I’ve noticed a lot of press on your ARM chips. Wikipedia mentions they were released 2H ’15-ish? Do you have any highlights on them? I’m guessing they’re targeting VM/cloud computing? I believe the industry’s been dipping its toes in that pond. Your chips are roughly in line with the competition?

IIRC AMD had a flash memory line. Was that the sort of commodity flash you’d find in an SSD? I’m asking because it struck me the other day: I actually own two sticks of AMD RAM, and putting that together with your flash production I wondered why there was AMD RAM but no AMD SSDs. Did the flash production go over to GlobalFoundries? I guess they’re sort of a white label now? Their customers probably don’t want to compete with AMD SSDs?

NVIDIA seems to have seen some success with their TressFX tessellation. That success wouldn’t have happened if NVIDIA hadn’t invested a lot in the hardware, but it also required a big investment in middleware, and on top of those two things, they needed games and game engines to use them. Can any lessons be learned from that?

Do you mind saying, in rough terms, how similar the Windows and Linux driver code is? Is there any shared code?

1) This is just me wildly ballparking based on other low-overhead APIs, and it could be total BS when all is said and done, but I think 7-15% in GPU-bound scenarios, and up to 25% in scenarios where the game is binding the CPU.
2) One of the points of low-overhead APIs is to move the driver and the run time out of the way, and minimize their impact to the overall rendering latency from start to finish. Vulkan and DX12 both behave like this. And I want to be clear that they are designed like this because all graphics drivers need to get out of the way to achieve peak performance from an app.
3) Generally APIs like Vulkan are designed to eliminate CPU binding on modest CPUs, and then expand performance on powerful CPUs. This has the effect of raising both the floor and the ceiling of CPUs vs. higher level APIs.
4) I am always most excited about FreeSync. Any gamer who knows that “magic moment” where the game FPS matches the display refresh rate, and how smooth that can be, is crazy not to want that all the time. But that’s what FreeSync gives you: that perfectly liquid smoothness at damn near any frame rate.
5) We are using the more advanced 14nm node. NVIDIA is 16nm.
6) You ask about ARM, but that line of business is furthest from my role. I cannot answer these questions.
7) We used to own a company called “Spansion”; Spansion was sold off. Now we contract with third-party DRAM and NAND ODMs to build products to our specifications.
8) NVIDIA had success with TressFX because we designed the effect to run well on any GPU. It’s really that simple. They were successful because we let them be. We believe that’s how it should be done for gamers: improve performance for yourself, don’t cripple performance for the other guy. The lesson we learned is that actual customers see value in that approach.
9) I am not a developer, and not privy to the codebases.

2) Forced AA modes often do not work in the driver because most games use deferred rendering engines that are basically incompatible with pipeline AA options.

Which leaves VSR as the only driver-enforced choice.

Could you please explain why the range of VSR resolutions for GCN1 and GCN2 is more limited than for GCN3 GPUs (the formers basically can’t output 4K in 1080p/1200p monitors)?

Could this be overcome in future driver updates?

We have hardware that performs real-time frame resizing with 0% performance impact above and beyond the change in resolution selected by the user. This hardware became more advanced as we iterated GCN.

It is possible to explore shader-based methods, but those can introduce performance penalties of their own.

We will continue to look at adding additional resolution options to VSR.

1) Is there a serious focus within AMD on exploiting machine learning applications through Radeon hardware? We have seen the compute capabilities of Radeon GPUs being (literally) years ahead of the competition, yet AMD does not seem too invested in supporting the (hugely) growing machine learning landscape.

2) Are there any plans of AMD entering the autonomous driving market? Or at least putting out dev platforms similar to Jetson TX1 and Drive PX2?

3) Will there be any further GPUOpen surprises this year in scientific computing and machine learning/AI? 🙂
Thank you for your thorough answers elsewhere in the thread!

1) Machine learning is more of a FirePro question. We just released some deep learning code on GPUOpen: http://gpuopen.com/compute-product/hccaffe/
2) Not that I am aware of.
3) Much more GPUOpen news coming at GDC.

1 – Is there any place where we can see the VCE encoding capabilities for each GCN GPU so far?

2 – In terms of encoding quality, Steam In-Home Streaming using AMD’s hardware acceleration isn’t great, especially when compared to Steam’s own software encoding. Are there any plans for improving the encoding quality on current AMD GPUs for IHS?

1) 1080p on GCN1.0. Up to 4K60 on Hawaii, Tonga, Fiji.
2) Encoding quality is governed by the encoder software, be it a GPU or CPU. If the quality isn’t up to expectation, then the implementation of the encoder needs work.

Any plans to work with watercooling vendors like EK, XSPC, and Bitspower to make waterblocks for the Polaris cards, or is that left up to the board partners?
When you see waterblocks very close to the release of our GPUs, that’s because we worked with the block vendors and gave them PCB layouts.

Is AMD ever going to fix the flickering lines in less demanding games when using FreeSync?
https://community.amd.com/thread/194556

This is related to this item in our known issues list: “Core clocks may not maintain sustained clock speeds resulting in choppy performance and or screen corruption”

We’re working on it.

Will the Fury X2 have HDMI 2.0, or will we have to get an adapter from Club3D or Accell?

All Fury-based GPUs have HDMI 1.4.

So this might be better directed at either the CPU side of AMD or OEMs, but I’ll ask anyway. Your lower power APUs seem really well suited to an HTPC build, but Carrizo and Carrizo-L seem nowhere to be found when it comes to Mini-ITX boards or mini PCs (with the exception of the 65W desktop Carrizo parts, which lack a GPU). Any chance this will change? Is this the result of some conscious decision on AMD’s part, or is it just a matter of OEM interest?
It is a conscious decision for us to design a mobile-specific APU, and sell it only into mobile.

I’ve been a user of multi-GPU cards for a long time. 5970, 6990, 7990… The problem is that we’re in such a minority, support on the developer side is lacking. There are often problems with flickering, and some features, such as SSAO, I’ve been told will “never” work on multi-GPU because they require the whole scene to be rendered first. Often the “fix” recommended by developers for multi-GPU issues is to “just run it in fullscreen windowed mode”. Of course some users will do that, and seem satisfied, but most of us who bothered to buy the multi-GPU card in the first place are infuriated by this answer, because we understand that they’re effectively telling us, in a roundabout way, to turn off one of the cores we paid good money for.

The latest Unreal engine doesn’t even support multi-GPUs.

The situation is looking dire. My concern is that multi-GPU is dying, and I’m not sure how to save it.

My question is, essentially, for you to address this topic as best you can. I apologize for the waning of specificity in the question.

It is my hope that DX12/Vulkan will emphasize mGPU by making mGPU solutions more flexible and obvious to the developer. We do what we can in the AMD Gaming Evolved program to push developers towards making engine decisions that benefit mGPU, too.

I’ve already asked a whole bunch of questions and had really good answers. One more has sprung to mind – I gather the interposer on Fiji cards is rather delicate and general advice is not to change coolers around as it’s easy to break it when cleaning thermal paste.

If a hypothetical future card is released which uses an interposer could/would anything be done to reduce the risk of accidental damage and make it so installation of aftermarket coolers is easier and relatively risk-free? For example is a heatspreader an option, or would that have an unacceptable effect on thermal performance?

Silicon interposers are indeed delicate, but even a heatspreader or shim would compromise thermals. Best just to be careful. 🙂
//EDIT: And don’t change your coolers, because that would violate your warranty, and my attorneys would want me to remind you of that.

Thank you for doing this AMA, lots of interesting answers so far!
1. What plans are there for future improvements to Eyefinity?
2. Has AMD considered making a “display output” GPU? As in a very weak GPU (like the R5 240) intended only to drive a lot of high-res displays showing flat images, video and other “non gaming” loads? A lot of those low-end GPUs currently have very few outputs and have VGA out but not DP at all. I’d much rather buy a dedicated “display” card than having to buy two DP MST hubs. In short, I want an easy way to get more DP outputs that doesn’t cost an arm and a leg.

1) I think we’re pretty happy with how Eyefinity has shaken out. The last “big” thing I wanted our team to tackle was PLP support, which there’s hardware for in some of our newer GPUs. Beyond that, I can’t see much else to add, but I’m open to suggestions.
2) An interesting idea and I see your point, but the vast majority of requests for this type of card are for digital signage, which falls into the FirePro bucket. It’s also possible for AIBs to design these sorts of boards for Radeon, which is our preference. Example.

Just wanted to say great job with the consistent GPU driver updates! Kudos to the team! Any plans to completely integrate all the GPU driver options into Crimson away from the original Catalyst GUI?

Thanks. I’m pretty proud of the direction Radeon Software has gone. It is our intention to ultimately replace all of CCC with Radeon Software, but that will come in phases.

Also, have you tried any VR games yet? And, how large of an impact do you think Polaris will have on VR? And… are you able to give any information on when the development of Polaris started? (A more specific answer would be nice, but a general time frame would suffice if at all possible)
I’ve tried a few VR demos, but I only tend to encounter VR when I’m at tradeshows and busy beyond belief. I enjoy the experience a lot, but don’t think I’m personally ready to wear a headset at home. 🙂

As for what Polaris will do to VR, you’ll have to wait and see!

Fury X2, Whatever happened to it? Any word on release date?
The product schedule for Fiji Gemini had initially been aligned with consumer HMD availability, which had been scheduled for Q415 back in June. Due to some delays in overall VR ecosystem readiness, HMDs are now expected to be available to consumers by early Q216. To ensure the optimal VR experience, we’re adjusting the Fiji Gemini launch schedule to better align with the market.

Working samples of Fiji Gemini have shipped to a variety of B2B customers in Q415, and initial customer reaction has been very positive.

Have you had any chance to use or play with Polaris personally? If so, how was it and are you excited for it? (The answer can be vague or specific)
Yes, but only the demos that were also shown to media. I am really freakin’ excited about it. It’s been 4-5 years since the last big node jump, and I’m thrilled that we’ve taken advantage of that to update every functional IP block in the ASIC.

I also wanted to let you know I buy AMD/ATi for ethical reasons, such as FreeSync, GPUOpen, and essentially all these… various… reasons. I love that Richard Huddy wasn’t afraid to lay out the facts in all their gory detail regarding the BS NVIDIA and Intel have been pulling over the years. People need to know who and what they’re supporting with their voting dollars, and what kind of ethical or unethical business practices they’re advocating or condemning, and factor that into their purchase criteria.

As long as you keep being the Good Guys, you’ve got a life long customer. Thanks for not being evil. 🙂

I personally and professionally believe very strongly in open standards and transparent source code. At the end of the day, as a gamer myself, I want to sit down and play a game that just bloody works. When the industry screws around with black boxes and other janky efforts, my games don’t run well, and I’m not some special snowflake that doesn’t get pissed off when my games don’t run well.

I hope I’m not late, but will Hawaii GPUs on Linux get Vulkan support? I’ve heard that Hawaii has an experimental build in amdgpu at the moment.
Anything that’s in amdgpu will be covered by the Vulkan user mode driver, as far as I am aware.

1. Will you ever do another promotion again with Cloud Imperium Games for Star Citizen? This got me back into buying AMD Cards
2. What performance gains can we see with the Polaris line of GPUs over the current gen Radeon Cards?
3. Are there any plans with future CPUs and APUs to do a Tick / Tock Release cycle similar to how Intel does their release cycles?

1) Game bundles come and go. I can’t promise we’ll do another Star Citizen bundle, but we are always looking at bundle opportunities like we recently started with Hitman.
2) Currently we have projected a 2x performance per watt jump over existing hardware.
3) I am not in the CPU business and do not know.

Do you expect interposers to experience a moores law like improvement trend?

This is one of my favorite questions on the thread. In fact, interposers are a great way to advance Moore’s law. High-performance silicon interposers permit the integration of different process nodes, different process optimizations, different materials (optics vs. metals), or even very different IC types (logic vs. storage), all on a common fabric that transports data at the speed of a single integrated chip. As we (“the industry”) continue to collapse more and more performance and functionality into a common chip, like we did with Fiji and the GPU+RAM, the interposer is a great option to improve socket density.

I’ve seen speculation of a possible far-future arrangement where the inherent difficulty of making big chips on super-small processes is combated by having one or two different designs of very small chip combined in great numbers on an interposer – could that actually become a reality? I vaguely remember reading a white paper that touched on the matter. Is there much you can say that isn’t sensitive?

Yes, it is absolutely possible that one future for the chip design industry is breaking out very large chips into multiple small and discrete packages mounted to an interposer. We’re a long ways off from that as an industry, but it’s definitely an interesting way to approach the problem of complexity and the expenses of monolithic chips.

My question was more about whether the interposers themselves would experience an exponential increase of a feature. I did read or watch something about stacking allowing more specialized processes for each part of a circuit.

Ooooh, I see. Well, interposers right now are “dumb” in the sense that they’re basically just silicon motherboards.

I totally get designing a mobile-specific APU given the realities of the current market. I don’t really understand only selling it into mobile though. With their low TDP, mobile oriented parts seem great for very small form factor and HTPC applications. Could you elaborate on the decision not to sell these outside of mobile applications or is that getting into sensitive material?
Because we already have desktop BGA or socketed chips like Kabini, Temash, Beema, Mullins that would suit an HTPC just fine. And small form factor, too.

Where would performance be if we got the 20nm node?

Are there any features besides DP 1.3 that have been cut because of the node collapsing?

Anything on the horizon that changes the way games are played, like Eyefinity did, from a feature standpoint?

I don’t know. The 20nm node was never designed for large, high-performance chips like a GPU. It’s hard to model something that could never have been.
HDMI2 got cut because of the node collapse.

Okay, everybody. As it is 5:20 PM here, it’s time for me to sign off and work on some other tasks that I needed to wrap up today.

First, I really appreciate the opportunity Tizaki and the /r/amd mod team + community arranged to do an AMA. I’ve always wanted to do an official one, and I can finally check that off my bucket list. SUPER EXCITING FOR ME.

Secondly, I know that I could not get to all of the Polaris architecture/SKU/pricing questions that people had. Right now those things are protected by strict NDAs, and I rather like keeping my job. 😉 Even so, you guys deserve answers to your good and hard-thought questions. Rest assured that I will be back with another official AMA to answer those questions as we get closer to the Polaris release mid-year. That’s the right thing to do!

Gengar is best Pokemon. Shower daily. Be yourself. Brush your teeth. Play Rocket League. And remember to have fun. <3
