In addition to the Ryzen 5 5600X 6 Core CPU, AMD is also readying the Ryzen 5 5600 which aims to be one of the best mainstream gaming chips under $250 US. The report comes from a China-based tech outlet (via Harukaze5719) which states that the chip is expected to launch sometime in 2021.
AMD Readies Ryzen 5 5600 6 Core CPU For Around $220 US, Ryzen 5 5600X 6 Core Offers Better Gaming Performance Than the Intel Core i7-10700 8 Core CPU
First up, the report talks about the AMD Ryzen 5 5600X which is a CPU that has been made official by AMD on 8th October. The AMD Ryzen 5 5600X features 6 cores and 12 threads with a base clock of 3.70 GHz and a boost clock of 4.60 GHz. Compared to the Ryzen 5 3600XT, the CPU has a 0.1 GHz lower base clock but a 0.1 GHz higher boost clock. The Ryzen 5000 CPUs are also based on the new and improved Zen 3 core architecture featuring a more efficient 7nm process node from TSMC. It’s still based on the chiplet design and comes with a 12nm I/O die.
The CPU features 35 MB of total cache (L2 + L3) on a single CCD (Core Complex Die). It carries a 65W TDP, includes a boxed cooler in the package, and will retail for $299 US when it goes on sale on the 5th of November.
During its official presentation, AMD compared the Ryzen 5 5600X to the Intel Core i5-10600K, both of which retail for $299 US. The Ryzen 5 5600X is said to offer 19% better single-threaded performance per dollar, 20% better multi-threaded performance per dollar, and 13% better gaming performance (1080p) per dollar.
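Per-dollar figures are simple ratios of benchmark score to price. As a quick illustration (the benchmark scores below are hypothetical placeholders, since AMD's slide does not list raw numbers; only the $299 prices come from the comparison):

```python
# Illustrative perf-per-dollar math. The benchmark scores below are
# hypothetical placeholders; only the $299 prices are from AMD's slide.
def perf_per_dollar(score: float, price_usd: float) -> float:
    return score / price_usd

r5_5600x = perf_per_dollar(609, 299)   # hypothetical single-thread score
i5_10600k = perf_per_dollar(512, 299)  # hypothetical single-thread score

# At equal prices, the perf-per-dollar lead equals the raw performance lead.
advantage = (r5_5600x / i5_10600k - 1) * 100
print(f"{advantage:.0f}% better single-thread performance per dollar")
```

Since both chips carry the same $299 price tag here, the per-dollar deltas AMD quotes reduce to straight performance deltas.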
The source however says that the Ryzen 5 5600X has enough muscle to even tackle the 8 core Core i7-10700 which retails for $50 US more and has higher clocks up to 4.8 GHz and even higher power limits than the Ryzen 5 5600X. The Ryzen 5000 processors also have overclocking enabled which is only available on the “K” unlocked SKUs from Intel. Intel’s 400-series boards do feature power limit adjustments to allow for higher base clocks but that’s not proper overclocking as some users seem to call it these days. Overall, the Ryzen 5 5600X’s $299 US price seems to be justifiable if the figures in gaming turn out to be true.
There’s also the AMD Ryzen 5 5600 which is said to launch early next year. The CPU will feature 6 cores and 12 threads with slightly lower clocks than the Ryzen 5 5600X. The CPU will feature a 65W TDP and is said to hit a retail price of $220 US. The Ryzen 5 5600 will most definitely feature OC support, which should bring its performance close to the Ryzen 5 5600X's.
If that’s the case, then the Ryzen 5 5600 has the potential to become the best mainstream chip for 2021, outpacing the Ryzen 5 3600/3600X which are some of the most popular mainstream gaming processors out in the market today. We will keep you posted if we hear about more Ryzen 5000 SKUs.
A few months back, AMD released a statement saying that it will support Ryzen 5000 CPUs on older 400-series motherboards (X470 & B450). While X570 is getting the first round of support, it seems some board partners will still focus more on their 500-series lineup than the 400-series, as hinted by ASUS.
ASUS Not Supporting AMD Ryzen 5000 CPUs On High-End ROG X470 Motherboards, Only X570 Offers Support
The story comes from AMD's subreddit, where a user contacted ASUS to ask about AMD Ryzen 5000 CPU support for his motherboard, the ASUS ROG Crosshair VII HERO, which is based on the X470 chipset. In its reply, ASUS says that according to its engineers, there are no plans for the Crosshair VII HERO to support the Ryzen 9 5900X CPU. The company rep even goes on to advise the user to purchase an X570 or B550 chipset-based motherboard that will support the Ryzen 5000 series CPUs.
“I am writing this email to provide you an update about your ongoing case. According to our engineers, We have no plans for the Crosshair VII Hero to support the Ryzen 5900X, please purchase Crosshair VIII Hero and any Ass (ASUS*) B550 motherboard that will support Ryzen 5900X and 5000 series processors.” via Reddit
For one, there’s no reason why the 400-series chipset motherboards or even the ROG Crosshair VII HERO from ASUS shouldn’t support AMD’s Ryzen 5000 CPUs. AMD has already provided its board partners with the necessary code to enable support so unless ASUS wants to push their 500-series first, excluding ROG Crosshair VII HERO (X470) from the support list doesn’t make any sense at all.
The other possibility is that ASUS is simply indicating that there will be no support at the launch of the CPUs, which matches what AMD has officially confirmed. In its support slide, AMD stated that 500-series motherboards will be ready for Ryzen 5000 CPUs at launch while 400-series motherboards will receive their first beta BIOS releases in January, at least two months after the Ryzen 5000 launch.
We remain hopeful that ASUS opens up support on its high-end ROG X470 motherboards, but the decision is theirs to make and not AMD's. AMD has done its job by providing the code to partners, so it's mostly an AIB decision whether the full X470 lineup supports Ryzen 5000 CPUs or just a few select variants. We will keep you updated regarding the 400-series support plans of various AMD partners.
References to Intel’s next-generation Meteor Lake line of CPUs have been spotted in the latest Linux patches by Phoronix (via Videocardz). The future CPU lineup was mentioned in the Linux 5.10 patch release.
Intel Meteor Lake CPUs Spotted in Linux Patches, Will Feature Brand New CPU & GPU Cores on Next-Gen Process Node
The Intel Meteor Lake line of CPUs is a far-future family that will appear sometime in 2022. The new line of processors will succeed Intel’s Alder Lake family which will make its debut in the second half of 2021. The CPU family is expected to make use of next-generation core technologies and feature a brand new process node but before that, let’s see what details the Linux patches unveil for Intel’s Meteor Lake.
Intel is planning to add support for Meteor Lake in phases, and the first comes in the form of the Intel e1000e Linux driver, which is being extended to support the Meteor Lake client platform, as reported by Phoronix. The e1000e driver is Intel's Gigabit Ethernet driver, so from the looks of things, there will still be 1GbE LAN support for Meteor Lake, or it could just be used for testing early variations of the silicon. Intel already offers 2.5GbE LAN support on its existing platforms, so it is likely that they will retain that and move to something even better in the coming generations, e.g. 5GbE and beyond.
Intel Desktop CPU Generations Comparison:
(Table fragment: the original comparison listed each Intel desktop CPU family with its maximum core count and PCIe support, ranging from PCIe Gen 2.0 on the oldest generation through PCIe Gen 3.0 across most of the lineup up to PCIe Gen 4.0, with Gen 4.0 still unconfirmed for the two newest families.)
As for what Intel's Meteor Lake family is going to offer, the family is expected to be supported on the LGA 1700 socket, the same socket used by Alder Lake processors. We can expect DDR5 memory and PCIe Gen 5.0 support. Other than that, we can also expect Intel to introduce brand new core technologies based on an advanced (next-gen) process node for the Meteor Lake family.
Intel’s Alder Lake CPUs will feature a mix of Golden Cove and Gracemont Atom cores so it is highly likely that Meteor Lake will offer the next-generation Ocean Cove core architecture along with an enhanced Atom core architecture. Previous rumors have indicated that Ocean Cove could bring up to 80% IPC improvement over Skylake architecture.
Intel CPU Generational IPC Chart (Rumor):
Considering that Alder Lake CPUs will be based on the 10nm SF (SuperFin) process node, it is likely that Intel's 2022 lineup will make full use of the brand new 7nm design. We cannot say for sure, since Intel will also be outsourcing some of its chip orders to third parties such as TSMC, though those are mostly GPUs for now. Once again, the Meteor Lake lineup launches in 2022, which is still a long way off. AMD is also expected to introduce its next-generation Zen 4 based processors for its own consumer platform around 2022, which should be the direct rival to Meteor Lake.
Drama ensued as Gigabyte launched its first AORUS branded GeForce RTX 3080 graphics card yesterday in the retail market. The card went live on Newegg but just 10 minutes after, the card was sold out entirely with Gigabyte putting the blame on the messy handling of the situation by Newegg.
Gigabyte’s First AORUS Branded RTX 30 Series Card, The AORUS GeForce RTX 3080 Master, Sold Out Within 10 Minutes on Newegg
While Gigabyte tried to clarify the botched launch, users waiting to purchase the card were riled up over the fact that they had waited so long to get their hands on the card only to see it show up as "Out of Stock" by the time they could place an order. The mess occurred when AORUS's community manager on the official NVIDIA subreddit shared what ended up being a broken link with the community.
As per Brian (AORUS Community Manager), the link was supposed to redirect users to the AORUS GeForce RTX 3080 Master graphics card page over at Newegg, but when the launch embargo lifted, the link didn't work & simply gave an error. Ten minutes after the launch, Brian replaced the link with a working one, but by that time, the entire inventory of the AORUS Master RTX 3080 graphics card was sold out. Following is what Brian had to say:
“Sorry guys this was a mess of a launch with a lot of last minute poor communication between us and Newegg. I was notified a minute before “launch” that the link they gave us was broken and did not work. I wish there was a better way we could have done this ,I had no control of the situation other than the information that was given to me. I hope you’ll understand, I totally understand your frustrations and being upset with us. Trust me, I am equally frustrated and upset with the situation.” – via Reddit
Soon after the launch, Gigabyte’s and NVIDIA’s official subreddits had to be locked down. It is reported that Gigabyte’s employees received threats from Reddit members (which is completely unnecessary) over the botched launch of the AORUS RTX 3080 Master graphics card and as such, the subs had to be cleaned.
There are obviously a lot of things that can be deduced from this launch, and it isn't solely the low stock of these GPUs. Once again, Newegg had confirmed to Gigabyte that it had taken measures to monitor each individual order live during the launch window to prevent bots (one major issue with the RTX 3080 launch), but it looks like scalpers still managed to get their hands on these cards and have already listed them on eBay for over $2000 US.
Hence, it looks like legitimate customers were once again pushed away from getting the cards, and Newegg's system isn't really an effective measure against scalpers. There's no word on when new stock of the AORUS GeForce RTX 3080 Master graphics card arrives, but Brian did say that the card is expected to hit the EU region next week on 10th October, while Canadian residents can expect the card in 2-3 weeks. Nevertheless, it does look like AORUS had limited stock of its new graphics card, which wasn't enough to fulfill demand. We may see stock replenished in phases, but for now, the card is entirely sold out.
Aside from the whole launch drama, Gigabyte also confirmed some interesting details of its RTX 3080 MASTER graphics card. According to the company, the Master variant will ship with a TGP of 380W while the GeForce RTX 3080 Vision OC will ship with a TGP of 370W. The price of the AORUS GeForce RTX 3080 Master is set at $849.99 US while the Vision OC variant will retail at $769.99 US. The Vision OC is expected to hit Newegg on 9th October at 12:00 AM PST/PDT, but given yesterday's launch, expect this timing to change.
NVIDIA’s GeForce RTX 3060 Ti custom graphics cards from Gigabyte have been listed by EEC where they were submitted by the manufacturer for certification. This confirms that NVIDIA and its board partners are getting ready to unveil a fourth GeForce RTX 30 series graphics card next month that will target the mainstream segment.
NVIDIA GeForce RTX 3060 Ti 8 GB Custom Graphics Cards Listed by EEC – Four Gigabyte Models Featuring AORUS Master, Gaming, & Eagle Series
The Gigabyte models that were listed at EEC were spotted by Komachi_Ensaka (via Videocardz). It looks like Gigabyte is working on at least four models which will be part of its GeForce RTX 3060 Ti series lineup. The custom models are listed below:
GIGABYTE RTX 3060 Ti 8GB AORUS Master (GV-N306TAORUS M-8GD)
GIGABYTE RTX 3060 Ti 8GB GAMING OC (GV-N306TGAMING OC-8GD)
GIGABYTE RTX 3060 Ti 8GB EAGLE OC (GV-N306TEAGLE OC-8GD)
GIGABYTE RTX 3060 Ti 8GB EAGLE (GV-N306TEAGLE-8GD)
As you can see, the GeForce RTX 3060 Ti is more or less confirmed now and the card will feature 8 GB of GDDR6 memory. Gigabyte will have the AORUS Master variant as its top model while the OC and standard variants of the Eagle and Gaming series will serve the gaming market at a lower price point.
As per previously leaked specifications, the NVIDIA GeForce RTX 3060 Ti graphics card will use the GA104-200 GPU core and the latest PG190 SKU 10 PCB design (Reference & Founders Edition). The card will feature 4864 CUDA cores arranged in 38 SMs along with 8 GB of GDDR6 memory operating at 14 Gbps across a 256-bit bus interface to deliver a total bandwidth of 448 GB/s. The graphics card is rumored to feature a TDP of around 180W.
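The quoted 448 GB/s figure follows directly from the rumored memory speed and bus width; a quick back-of-the-envelope check using standard bandwidth math:

```python
def memory_bandwidth_gb_s(data_rate_gbps: float, bus_width_bits: int) -> float:
    """Total memory bandwidth in GB/s: per-pin data rate times bus width,
    divided by 8 to convert bits to bytes."""
    return data_rate_gbps * bus_width_bits / 8

# Rumored RTX 3060 Ti: 14 Gbps GDDR6 on a 256-bit bus
print(memory_bandwidth_gb_s(14, 256))  # -> 448.0
```

The same formula reproduces the bandwidth of the rest of the Ampere stack, e.g. 19.5 Gbps on a 384-bit bus gives the RTX 3090's 936 GB/s.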
NVIDIA GeForce RTX 30 Series ‘Ampere’ Graphics Card Specifications:
NVIDIA GeForce RTX 3060 Ti – 152 Tensor / 38 RT Cores – 8 GB GDDR6
NVIDIA GeForce RTX 3070 – 184 Tensor / 46 RT Cores – 8/16 GB GDDR6
NVIDIA GeForce RTX 3080 – 272 Tensor / 68 RT Cores – 272 TMUs / 96 ROPs – 10/20 GB GDDR6X
NVIDIA GeForce RTX 3090 – 328 Tensor / 82 RT Cores – 24 GB GDDR6X
There's no word on the availability of the graphics card yet, but previous reports have pointed to a late October launch, so we might hear more around the time the GeForce RTX 3070 launches. As for pricing, considering that the GeForce RTX 3070 costs $499 US, the GeForce RTX 3060 Ti could be priced around $349-$399 US while offering performance on par with or faster than the GeForce RTX 2080 SUPER.
MSI has just announced the release of the AMD Combo PI V2 1.1.0.0 BIOS Firmware which offers improved support and compatibility with existing and next-generation Ryzen processors. AMD X570 chipset motherboards will receive the BIOS update first, followed by B550 & A520 chipset-based motherboards.
MSI Rolls Out AMD Combo PI V2 1.1.0.0 BIOS Firmware For Its AM4 Lineup, Optimized Compatibility With Existing & Next-Gen Ryzen CPUs
According to MSI's blog post, the AMD Combo PI V2 1.1.0.0 BIOS update will release in four phases. A total of 10 MSI X570 motherboards will receive the BIOS update starting today, while B550 & additional X570 motherboards will be added to the support list by the middle of October. MSI will also have the BIOS firmware ready for its A520 series motherboards by the end of October, and plans to phase out the BETA release with the official MP BIOS starting in early November.
This will be around the same time AMD's next-generation Ryzen 5000 "Vermeer" CPUs based on the new Zen 3 core architecture launch. As for the features of the new Combo PI V2 1.1.0.0 BIOS Firmware, you can read them below:
Optimized compatibility for AMD Ryzen 3000-Series and Ryzen 4000 G-Series Desktop Processors and future AM4 socket processors
Solve some specific OC failure issues
Update SMU module
Optimized DDR4 memory overclocking solution
MSI X570/B550 Motherboards With AMD Combo PI V2 1.1.0.0 BIOS:
Aside from offering optimized compatibility for existing and next-generation AMD Ryzen CPUs, the new BIOS firmware also improves overclocking support, specifically DDR4 OC. There is also a range of fixes, including for some overclocking-specific failures that users might have encountered when performing manual overclocks. We will keep you updated as more information is revealed regarding 500-series board support for AMD's next-generation Zen 3 based Vermeer CPUs. AMD will officially lift the curtain on its Zen 3 Ryzen 5000 CPU lineup on the 8th of October, next week.
1usmus’s highly anticipated ClockTuner Utility for AMD Ryzen 3000 CPUs is now available for download. The new tool not only aims to help deliver increased performance for Zen 2 Ryzen owners but also improves efficiency by reducing the power draw of Zen 2 based processors.
1usmus’s ClockTuner Utility For AMD Ryzen 3000 CPUs Now Available To Download – Free Performance Boost For All Zen 2 CPU Owners!
Unveiled back in August by 1usmus, the ClockTuner utility for AMD Ryzen CPUs has been specifically designed to increase the performance of Zen 2 based processors, including Ryzen 3000 & 3rd Gen Ryzen Threadripper CPUs, without increasing power consumption. The utility can be downloaded at the following link:
AMD Ryzen processor with Zen 2 architecture (Renoir is temporarily not supported);
BIOS with AGESA Combo AM4 1.0.0.4 (and newer);
.NET Framework 4.6 (and newer);
CPU Voltage – Auto (BIOS);
CPU Multiplier – Auto (BIOS);
SVM (Virtualization) – disabled (BIOS, optional);
Spread Spectrum – disabled (BIOS, optional);
Ryzen Master 2.3 (uses the driver for monitoring);
Stable RAM overclocking or stable XMP.
The next set of requirements is also mandatory and applies to UEFI (BIOS) settings. Since the success of CTR depends heavily on the capabilities of the motherboard VRM (it is highly recommended to read this chapter – link), we need to lay a foundation in the UEFI (BIOS) to secure the tuning process against BSODs.
The most important setting is LLC (Load Line Calibration), my recommendations are as follows:
ASUS – LLC 3 or LLC 4;
MSI – LLC 3;
Gigabyte – in most cases Turbo, but it can also be Auto;
ASRock – Auto or LLC 2. Importantly, CTR has mediocre compatibility with ASRock motherboards, as all LLC modes show abnormally high Vdroop;
Biostar – Level 4+.
It is recommended to use additional settings for ASUS motherboard owners:
Phase mode – Standard;
Current capability mode – 100%;
How to install CTR
Download the CTR archive (download) and unpack it in a convenient place for you.
Download Cinebench R20 (download) archive and extract the archive contents to the “CB20” folder (this folder is located in the CTR folder).
Run Cinebench R20, accept the license agreement, close Cinebench R20.
How the tool achieves this is quite complicated, but 1usmus's CTR is fully automated, so users don't have to worry about anything. The utility increases the overall performance of AMD's Zen 2 processors, specifically those with a chiplet-based design, by undervolting each of the individual CCX modules. By undervolting each CCX, Zen 2 processors are shown to run not only faster but also cooler. This also drops overall power consumption while the CPU retains an active state for all energy-saving technologies.
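As a rough illustration of the idea (this sketch is ours and is not 1usmus's actual algorithm; CTR's real logic, voltage steps, and stability tests are far more involved), per-CCX undervolting can be modeled as stepping the voltage down while a stability test keeps passing:

```python
def tune_ccx(is_stable_mv, base_mv=1300, step_mv=5, floor_mv=1100):
    """Step one CCX's voltage down (in millivolts) while a stability
    test passes; return the lowest stable voltage found.

    is_stable_mv: callable standing in for a Prime95/Cinebench-style
    stress test of this CCX at the given voltage. All defaults here
    are illustrative values, not CTR's.
    """
    mv = base_mv
    while mv - step_mv >= floor_mv and is_stable_mv(mv - step_mv):
        mv -= step_mv
    return mv

# Simulated CCX whose silicon happens to be stable down to 1225 mV:
print(tune_ccx(lambda mv: mv >= 1225), "mV")  # -> 1225 mV
```

In the real tool the stability check is the embedded Prime95 preset, and the result of each CCX's search feeds its individual frequency/voltage profile.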
Another key feature of CTR is Smart Overclocking which evaluates the quality of each CCX & adjusts the frequencies individually. A special preset of Prime95, also developed by 1usmus, is embedded within the utility which evaluates the quality of each CCX. An algorithm has been designed which fine-tunes the frequency for a balanced operation for all CCX’s without shifting the load on the CPU nodes (modules).
Prime95 isn’t the only evaluation software embedded within CTR. 1usmus has also featured a plug-in test package of Cinebench R20 which not only evaluates the overall performance of the tuned CPU but also shows the CPU voltage and power consumption as a part of the efficiency tests that have been achieved while running the built-in Cinebench R20 benchmark.
1usmus has stated that CTR will be free for everyone to use and offers better results than most overclocking utilities or automated boosting techniques from motherboard vendors, which only deliver a small increment in performance while increasing TBP by 50% and sometimes even beyond that.
1usmus’s CTR (ClockTuner For AMD Ryzen) CPU Performance & Efficiency Tests
As for the results achieved with CTR, 1usmus has provided numbers from two Zen 2 systems. One is configured with the Ryzen 9 3900X (ASUS ROG Crosshair VII Hero) and the other with the AMD Ryzen Threadripper 3960X (ASUS ROG Zenith II Extreme). Both systems were compared at default & tuned (CTR) settings.
Compared to the AMD Ryzen 9 3900X at default settings, the tuned variant delivers a 7% increase in performance while reducing total CPU power draw by 12.8 Watts. The CPU runs at a higher frequency while maintaining a lower voltage of 1.225V versus the default 1.312V.
The AMD Ryzen Threadripper 3960X system saw a performance uplift of 5.2% and the total power draw dropped by 12 Watts. The CPU was also running at a lower voltage of 1.25V while maintaining stable clocks versus the default chip that was configured at 1.296V.
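As a rough sanity check of those undervolts (an approximation on our part, not 1usmus's data): dynamic CPU power scales roughly with the square of the voltage at a fixed frequency, so the 3900X's voltage drop alone predicts a double-digit reduction in dynamic power:

```python
# Dynamic power P ~ C * V^2 * f; at fixed frequency the reduction
# depends only on the voltage ratio. Voltages are the reported 3900X
# default vs CTR-tuned values.
v_default, v_tuned = 1.312, 1.225
power_scale = (v_tuned / v_default) ** 2
print(f"predicted dynamic-power reduction: {(1 - power_scale) * 100:.1f}%")
```

This first-order estimate lands around 13%, in the same ballpark as the measured 12.8W drop, though the real saving also depends on frequency, leakage, and workload.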
As for compatibility, the CTR software for AMD Ryzen CPUs is supported on all AM4 motherboards. Even if your motherboard doesn't expose per-CCX configuration, CTR would still work, since its low-level SMU access is able to bypass any limitations imposed by CPU or motherboard vendors. It is also specifically stated that CTR doesn't contain any code that may be perceived by anti-virus software as dangerous to the system. This was one of the major issues with the DRAM Calculator for Ryzen but has been fixed with CTR.
NVIDIA's GeForce RTX 30 series has been caught up in a major controversy ever since the lineup launched. A botched launch for both the RTX 3080 & RTX 3090 graphics cards was soon followed by user reports of several cards crashing during gaming. It was initially suggested that the cause could be related to the GPU boost algorithm, but more recent reports suggest the issue could have more to do with the hardware design that AIB partners have implemented on their custom products. NVIDIA has now come forward with an official statement regarding the matter.
NVIDIA Officially Responds To GeForce RTX 30 Series Issues: SP-CAP vs MLCC Groupings Vary Depending on Design & Not Indicative of Quality
There’s more than one part to this story so let’s start with what NVIDIA has to officially say on the matter. The statement was given to PCWorld’s Senior Editor, Brad Chacos, and is as follows:
“Regarding partner board designs, our partners regularly customize their designs and we work closely with them in the process. The appropriate number of SP-CAP vs. MLCC groupings can vary depending on the design and is not necessarily indicative of quality.”
In the statement, NVIDIA specifically states that their partner cards are based on custom designs and that they work very closely with them during the whole design/test process. NVIDIA does give AIBs reference specs to follow and gives them certain guidelines for designing customized boards. That does include the limits defined for voltages, power, and clock speeds. NVIDIA goes on to state that there’s no specific SP-CAP / MLCC grouping that can be defined for all cards since AIB designs vary compared to each other. But NVIDIA also states that the number of SP-CAP / MLCC groupings are also not indicative of quality.
There have been recent reports suggesting that MLCC is the way to go with AIB cards as they offer the most stable experience but NVIDIA here is saying otherwise. The discussion of SP-CAP /MLCC groupings started when Igor Wallossek from Igor’s Lab posted in his technical report that the caps might be a possible reason behind the crashes that users were facing.
In our previous report, it was pointed out that GeForce RTX 30 series cards generally crashed when hitting a certain boost clock above 2.0 GHz. Some users also found that cards with full SP-CAP layouts (Conductive Polymer Tantalum Solid Capacitors) were generating more issues compared to boards that use either a combination of SP-CAPs / MLCCs (Multilayer Ceramic Chip Capacitors) or an entirely MLCC design.
SP-CAP / SP-CAP & MLCC Groupings on Various GeForce RTX 30 AIB Cards (Image Credits: Igor’s Lab):
The difference between SP-CAP & MLCC capacitors is significant, and Videocardz has explained it in a simpler way, as stated below:
MLCCs are cheap and small; they operate at higher frequencies, voltages, and temperatures, but they are prone to cracking and piezo effects, and they have bad temperature characteristics.
SP-CAPs are bigger, have lower voltage ratings, and are worse at high frequencies. They are, however, stronger and not prone to cracking, they have no piezo effects, and they should operate better at higher temperatures.
Since this discovery, there have been statements from various AIBs on the matter. Several AIBs have told us that they believe the issue is mostly a hardware flaw that they're rectifying, while others believe it can be solved through a driver fix, though that would lead to lower clocks and voltages that could slightly affect the overall performance of the card. However, that remains to be seen. The following are the statements from the AIBs:
Recently there has been some discussion about the EVGA GeForce RTX 3080 series.
During our mass production QC testing we discovered a full 6 SP-CAPs solution cannot pass the real world applications testing. It took almost a week of R&D effort to find the cause and reduce the SP-CAPs to 4 and add 20 MLCC caps prior to shipping production boards, this is why the EVGA GeForce RTX 3080 FTW3 series was delayed at launch. There were no 6 SP-CAP production EVGA GeForce RTX 3080 FTW3 boards shipped.
But, due to the time crunch, some of the reviewers were sent a pre-production version with 6 SP-CAPs, we are working with those reviewers directly to replace their boards with production versions. EVGA GeForce RTX 3080 XC3 series with 5 SP-CAPs + 10 MLCC solution is matched with the XC3 spec without issues.
Also note that we have updated the product pictures at EVGA.com to reflect the production components that shipped to gamers and enthusiasts since day 1 of product launch. Once you receive the card you can compare for yourself, EVGA stands behind its products! — Jacob Freeman, EVGA Forums
Regarding the recent RTX 3080 issue, the investigation is ongoing and we will update you shortly. For those who have purchased our RTX 3090/3080 Trinity, please submit the form at the link below. We will keep in touch with you personally.
About the SP-CAP capacitors and MLCC capacitors of GALAXY RTX 3080/3090 products
Dear player friends:
Hello, everyone. Recently, many users have inquired about the specific capacitor configuration on the back of the GPU on the GALAX RTX 3080/3090 series graphics cards. After verification, the capacitors used on the back of the GPU on the RTX 3080/3090 models released by GALAX are as follows:
1. GALAX RTX 3080 Heijiang/Metal Master: 5 SP-CAP capacitors and one group of 10 MLCC capacitors on the back of the GPU. This version is currently on sale and is the original retail version.
2. GALAX RTX 3090 General/Metal Master: 4 SP-CAP capacitors and two groups of 20 MLCC capacitors on the back of the GPU. This version is currently on sale and is the original retail version.
3. GALAX RTX 3090 GAMER trial-production samples: currently only 6 units are in the hands of media and KOLs. This first batch of samples uses 6 SP-CAP capacitors; GALAX has confirmed that the officially produced and sold GAMER cards will use an optimized capacitor configuration. Note: this product is not currently on sale.
We are very grateful to players for their support of and love for GALAX. GALAX is consistent in its pursuit of product quality, and it is our mission to provide you with better and stronger hardware. In addition, the current full range of GALAX graphics cards is covered by a three-year warranty and personal warranty service. If you have other doubts or questions, please feel free to leave us a message, thank you! Source
Announcement on SP-CAP Capacitors and MLCC Capacitors of Gainward 30 Series Graphics Card Products
Dear Gainward consumer players:
Thanks to everyone who has bought and supported Gainward. Recently, we have received inquiries from many players who are concerned about the specific capacitor configuration on the back of the GPU on our newly released 30 series products. We hereby explain the situation:
All Gainward RTX 3080 10GB graphics cards released so far use 5 SP-CAP capacitors and 10 MLCC capacitors on the back of the GPU. The versions currently on the market are all original retail versions.
All Gainward RTX 3090 graphics cards released so far use 4 SP-CAP capacitors and 20 MLCC capacitors on the back of the GPU. The versions currently on the market are all original retail versions.
As a long-term NVIDIA AIC partner, Gainward has always designed and produced its products strictly according to NVIDIA's requirements. As for the capacitor concerns and new-product failures currently being discussed online, Gainward has so far received no such feedback.
In addition, all Gainward graphics cards are covered by a three-year warranty and personal warranty service. Thank you, consumers and players, for your support and love for Gainward. Source
Were AIBs Constrained By Time & Rushed To Launch Their Custom Designs?
In the meantime, MSI and ASUS have started updating their product pages for the GeForce RTX 3090 and GeForce RTX 3080 graphics cards with retail board pictures. Previously, the product pages showed pictures of pre-production models which still used older designs with a different SP-CAP/MLCC layout. The new pictures show an increased number of MLCC caps on the back compared to before.
We know for a fact that several AIBs were finalizing clock speeds even after the GeForce RTX 3080 was announced and clock speeds for RTX 3080 custom variants were revealed just a few weeks prior to launch. So it could be that AIBs didn’t get the time they needed to evaluate the clock speeds extensively or even their board designs. The same is true for the GeForce RTX 3090 and GeForce RTX 3070 custom models since their clock speeds have not been finalized yet either.
Following are the GeForce RTX 3080 Gaming X Trio, GeForce RTX 3080 Ventus 3X, and ASUS TUF Gaming GeForce RTX 3080 pictures from before and after showcasing the updated layouts on the back of the PCB (Image Credits: Videocardz):
NVIDIA’s New Driver Offers A Preliminary Fix While AIBs Evaluate Their Custom Designs With The First NVIDIA Test Drivers
There have also been reports that users are facing fewer issues after installing the new GeForce 456.55 drivers compared to the older ones on the same cards. Users have stated that their cards run at lower clock speeds but are more stable and don't spike as often or as high as before. So if you were one of the users facing crashes or issues with the cards, you can try out the new drivers. The cards we have been running (MSI RTX 3080 / RTX 3090 Gaming X Trio) didn't produce any issues during our earlier tests, and the new drivers hardly impact the clock speeds of the cards.
With that said, AIBs have confirmed to us that they are indeed working with the first test drivers from NVIDIA. The 456.55 release doesn't specifically list any fixes for these issues and only adds fixes for a certain number of games. NVIDIA might have introduced a preliminary measure to halt the crashes for now, but a more fine-tuned driver is still a few days away. We will keep this article updated with any more information we get related to the RTX 30 series issues.
KINGPIN himself has given us a first look at the flagship GeForce RTX 3090 graphics card from EVGA, the GeForce RTX 3090 KINGPIN Hybrid. The graphics card features a design that is made solely for enthusiasts and overclockers. KINGPIN even demonstrated the insane overclocking potential of the card last week by breaking the 3DMark Port Royal world record with a massive 2580 MHz overclock on the same card.
EVGA GeForce RTX 3090 KINGPIN Hybrid Pictured, 360mm AIO Liquid Cooler For This Beast Of A Card
There are definitely a lot of interesting details to talk about regarding the EVGA GeForce RTX 3090 KINGPIN Hybrid. We already got to see what the card looks like when EVGA officially announced its RTX 30 series lineup, but other details have been kept secret. Well, it looks like KINGPIN just went ahead and posted the first full picture of the card itself.
The EVGA GeForce RTX 3090 KINGPIN Hybrid will be the flagship design in the GeForce RTX 30 line from the company. It features a dual-slot design with a wide PCB that is covered by a massive shroud. The wide PCB is used to accommodate more electrical components and a beast of a power-delivery circuit that drives this card. The card features a matte black shroud with mesh grilles at the front and a large 9-blade fan that pushes air through the internal assembly.
The heatsink for the GeForce RTX 3090 KINGPIN consists of a large copper block that covers the electrical components while the GPU sits underneath the copper cold-plate of the pump. The card is a hybrid design that makes full use of liquid cooling and is attached to a 360mm radiator with three 120mm fans included. This should allow for some cool operating temperatures whether running at stock or overclocked.
Another interesting feature of the EVGA GeForce RTX 3090 KINGPIN graphics card is the small LCD display which can be seen on the side by the end of the shroud. This LCD should provide some useful functionality and maybe even some tuning options to users. In teaser shots, it can be seen that the card will feature three 8-pin power connectors on the back of the PCB.
Display outputs include the standard three DisplayPort connectors and a single HDMI 2.1 port. There’s no word on pricing and availability yet, but we can definitely see the card being a hit amongst overclockers and enthusiasts when it launches.
NVIDIA’s RTX 30 series launched to a ton of fanfare and jaw-dropping levels of performance claims and specifications – but somewhere between all the hype and third-party reviews, the promised doubling in performance vanished without a trace. Today, we are going to be investigating a very interesting phenomenon plaguing NVIDIA GPUs and why not everything is as it seems. Nothing is presented as the gospel truth for you to believe and you are encouraged to use your own judgment according to taste.
NVIDIA’s RTX 30 series has more than twice the TFLOPs, so where did all the performance go?
The argument is simple: Jensen promised twice the graphics power in Ampere GPUs, so we should see roughly twice the shading performance in most titles (without any bells and whistles like DLSS or RTX). This, most curiously, isn’t happening. In fact, the RTX 3090 is anywhere from 30% to 50% faster in shading performance in gaming titles than the RTX 2080 Ti, even though it has more than twice the number of shading cores. TFLOPs is, after all, simply the number of shading cores multiplied by the clock speed (and two FP32 operations per clock). Somewhere, somehow, performance is being lost.
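To make the paper math concrete, the TFLOPs figures above can be reproduced in a couple of lines. The core counts are the official specs; the clocks are NVIDIA’s rated boost figures, and real-world sustained clocks will differ per card:

```python
# Peak FP32 TFLOPs = shader cores x 2 ops per clock (FMA) x clock (GHz) / 1000
def fp32_tflops(cores: int, clock_ghz: float) -> float:
    return cores * 2 * clock_ghz / 1000

rtx_3090 = fp32_tflops(10496, 1.70)      # rated boost clock
rtx_2080_ti = fp32_tflops(4352, 1.545)   # rated boost clock

print(f"RTX 3090:    {rtx_3090:.1f} TFLOPs")              # ~35.7
print(f"RTX 2080 Ti: {rtx_2080_ti:.1f} TFLOPs")           # ~13.4
print(f"Ratio:       {rtx_3090 / rtx_2080_ti:.2f}x")      # ~2.65x on paper
```

On paper, then, the RTX 3090 should be roughly 2.5x to 2.7x the RTX 2080 Ti in raw shading, which makes the 30% to 50% gaming uplift all the more striking.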
One of three things is happening:
The individual shading core of Ampere is somehow inferior to Turing’s and the cards can’t actually deliver that FP32 TFLOPs number (in other words, Jensen lied).
There is something wrong in the BIOS/microcode or low-level drivers of the card.
The high-level drivers/game engines/software stacks can’t scale up to properly utilize the mass of shading cores present in Ampere cards.
Fortunately for us, this is a problem that we can easily investigate using the scientific method. If the Ampere cards’ shader cores are somehow inferior to Turing’s, then we should not be able to get twice the FP32 performance using *any* application. Simple, right? If, however, we can get the claimed performance in *any* application, then it becomes slightly tricky. While it would absolve the hardware of any blame, we would then need to find out whether the software stack/high-level drivers are at fault or whether it’s a microcode issue. While you can resolve hardware vs software with a very high level of certainty, you cannot do the same within the software side. You can, however, make a very good guess. Our logic flow diagram is as follows:
Rendering applications are designed to use a ton of graphics horsepower. In other words, their software is coded to scale far better than games (there have actually been instances where games refused to work on core counts higher than 16 in the past). If *a* rendering application can demonstrate the doubling in performance, then the hardware is not to blame. The cores aren’t inferior. If *all* rendering applications can take full advantage, then the low-level driver stack isn’t to blame either. This would point the finger at APIs like DirectX, GameReady drivers, and the actual code of gaming engines. So without any further ado, let’s take a look.
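The logic flow above can be sketched as a simple decision function. The function and its inputs are ours, purely illustrative, and just encode the three possible verdicts:

```python
def diagnose(doubling_in_any_renderer: bool, all_renderers_scale_fully: bool) -> str:
    """Narrow down where Ampere's 'missing' performance is being lost."""
    if not doubling_in_any_renderer:
        # No application at all can hit 2x FP32 -> the shader cores themselves fall short
        return "hardware: Ampere cores can't deliver the claimed FP32 rate"
    if all_renderers_scale_fully:
        # Hardware and low-level stack are fine -> blame the higher layers
        return "software: APIs, GameReady drivers, or game-engine scaling"
    # Some renderers scale fully, others don't -> low-level stack is suspect
    return "low-level: BIOS/microcode or low-level driver stack"

# Example: at least one renderer doubles, but not all of them scale fully
print(diagnose(True, False))
```

Feeding in the actual benchmark observations then picks out which layer the evidence points at.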
VRAY is one of the most shading-intensive benchmarks for GPUs. It is essentially the Cinebench for GPUs. It also helps that the program is optimized for the CUDA architecture, so it represents a “best case” scenario for NVIDIA cards. If the Ampere series can’t deliver the doubling in performance here, it will not do so anywhere else. The RTX 3090 in VRAY achieves more than twice the shading performance of the RTX 2080 Ti quite easily. Remember our flow diagram?
Since we have a program that can actually output double the performance in a ‘real world’ workload, it obviously means that Jensen wasn’t lying and the RTX 30 series is actually capable of the claimed performance figures – at least as far as the hardware goes. So we now know that performance is being lost on the software side somewhere. Interestingly, Octane scaled a little worse than VRAY – which is slight evidence of an immature low-level driver stack. Generally, however, rendering applications scaled a lot more smoothly than gaming applications.
We took a panel of 11 games. We wanted to test games on shading performance only, with no DLSS and no RTX. There wasn’t a particular methodology to picking the titles – we just benched the games we had lying around. We found that the RTX 3090 was on average 33% faster than the RTX 2080 Ti. This means, for the most part, the card is acting like a 23.5 TFLOPs GPU. Performance is obviously taking a major hit as we move from rendering applications to games. There is a vast differential between the performance targets the RTX 30 series should be hitting and the ones it’s actually outputting. Here, however, we can only guess. Since there is a lot of fluctuation between various games, game engine scaling is obviously a factor, and the drivers don’t appear to be capable of fully taking advantage of the 10,000+ cores that the RTX 3090 possesses.
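An “effective TFLOPs” figure like this can be back-calculated from the measured gaming uplift. Note that the RTX 2080 Ti clock below is an assumed real-world sustained boost of around 2.0 GHz (not the 1.545 GHz rated spec), so this lands in the same ballpark as, rather than exactly on, the 23.5 TFLOPs figure:

```python
# Effective TFLOPs the RTX 3090 is "acting like" in games, back-calculated
# from the ~33% average uplift over the RTX 2080 Ti.
cores_2080ti = 4352
real_world_clock_ghz = 2.0   # assumed sustained boost; varies per card and cooler
uplift = 1.33                # measured average gaming speedup

tflops_2080ti = cores_2080ti * 2 * real_world_clock_ghz / 1000   # ~17.4
effective_3090 = tflops_2080ti * uplift                          # ~23.2

print(f"RTX 3090 effective: ~{effective_3090:.1f} TFLOPs")
```

Against the ~35.7 TFLOPs the hardware is rated for, roughly a third of the card’s paper throughput is going unused in games.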
So what does this mean? Software bottleneck, fine wine and the amazing world of no negative performance scaling in lineups
Because the problem with the RTX 30 series is very obviously one that is based in software (NVIDIA quite literally rolled out a GPU so powerful that current software cannot take advantage of it), it is a very good problem to have. AMD GPUs have always been praised for being “fine wine”. We posit that NVIDIA’s RTX 30 series is going to be the mother of all fine wines. The performance gains we expect these cards to see through software over the coming year will be phenomenal. As game drivers, APIs, and game engines catch up in scaling and learn how to deal with the metric butt-ton (pardon my language) of shading cores present in these cards, and DLSS matures as a technology, you are not only going to get close to the 2x performance levels – but eventually exceed them.
While it is unfortunate that all this performance isn’t usable on day one, this might not be entirely NVIDIA’s fault (remember, we only know the problem is on the software side; we don’t know for sure whether the drivers, game engines, or the API is to blame for the performance loss) and one thing is for sure: you will see chunks of this performance get unlocked in the months to come as the software side matures. In other words, you are looking at the first NVIDIA Fine Wine. While previous generations usually had their full performance unlocked on day one, the NVIDIA RTX 30 series does not, and you would do well to remember that when making any purchasing decisions.
Fine wine aside, this also has another very interesting side effect. I expect next to no negative performance scaling as we move down the roster. Because the performance of the RTX 30 series is essentially being software-bottlenecked, and the parameter around which the bottleneck revolves appears to be the number of cores, less powerful cards should experience significantly less bottlenecking (and therefore better scaling). In fact, I am going to make a prediction: the RTX 3060 Ti, for example (with 512 more cores than the RTX 2080 Ti), should experience much better scaling than its elder brothers and still beat the RTX 2080 Ti! The lower the core count, the better the scaling, essentially.
While this situation represents uncharted territory for NVIDIA, we think this is a good problem to have. Just like AMD’s introduction of high-core-count CPUs forced game engines to support more than 16 cores, NVIDIA’s aggressive approach with core counts should force the software side to catch up on scaling as well. So over the next year, I expect RTX 30 owners will get software updates that drastically increase performance.