This week’s trip down GPU memory lane is all about the NVIDIA 8 series of graphics cards, a series that marks the arrival of DirectX 10 and a wholly new GPU architecture. Arriving in late 2006, the NVIDIA 8 series remains a fondly remembered era for many enthusiasts and of course overclockers, especially the GeForce 8800 GTX, a card that is still a topic of conversation among retro-minded HWBOT members today. Let’s take a look at the hardware associated with the GeForce 8 series era, the technology and features that arrived at that time, and some of the scores and submissions that were made using the popular GeForce 8800 GTX card.
The NVIDIA 8 series was officially launched on November 8th 2006 with the arrival of a new flagship card, the GeForce 8800 GTX. The card presented a new GPU to the world’s media, the NVIDIA G80, an entirely new design based on the Tesla architecture. The GPU itself was manufactured on a 90nm process and packed a groundbreaking 681 million transistors into a die measuring 484mm². The G80 was designed specifically with DirectX 10 in mind, taking advantage of many of the specific technologies and ideas introduced by Microsoft. One such feature was the implementation of unified shaders.
Today’s trip down GPU memory lane is all about the NVIDIA 7 series that arrived on the scene in June 2005. Where previous GPU designs had heralded major innovations and the introduction of entirely new technologies, the 7 series was more of an update by comparison. The new GPU arrived with a change in nomenclature and notably a change in the way that NVIDIA graphics cards were actually launched - NVIDIA and AIB partners had products on store shelves the very same day that the press embargo was lifted. Let’s look at the GPUs and cards that arrived as part of the new 7 series launch, the cards that have since proved to be popular with overclockers on HWBOT and of course, the notable scores that grace our database to this day.
The NVIDIA GeForce 7800 GTX was launched on June 22nd 2005 as the company’s brand new flagship card offering. At launch the card was immediately available in the retail channel, literally the same day, which at the time was largely unheard of. This was seen as NVIDIA more or less giving ATI the proverbial finger, as previous ATI launches had tended to be prefaced with vague ‘coming soon... we hope’ messaging. The 7800 GTX was based on the G70, the successor to the NV4x series that had powered the GeForce 6 series. The change in naming scheme was apparently a marketing decision, with GeForce 7 being better represented by G70 than NV47. The G70 was largely an evolution of the same architecture that had powered the previous NV4x generation. It again used Shader Model 3.0 with support for DX9.0c and OpenGL 2.1. Nothing new there. The real interest is when you consider the rendering configuration.
Welcome back to another episode in our GPU Flashback Archive series. Following on from last week’s look at the GeForce FX series, we turn our attention to its successor, the NVIDIA GeForce 6 series. After rising to a position of relative dominance in the early years of GPU design, the GeForce 4 and subsequent FX series had seen NVIDIA lose ground to ATI, who had stolen a march with their highly popular Radeon 9000 series. The stage was set for a return with the launch of a new GPU design and a series of cards that required more space in your rig and additional power to deliver a truly next generation gaming experience. Let’s turn our minds back to 2004 and check out the technologies and features that debuted with the GeForce 6 series, plus the most popular cards of the era and the most notable scores that have been submitted here on HWBOT.
The NVIDIA GeForce 6 series arrived in tech reviewers’ hands in April of 2004, debuting with the new NV40 GPU and two graphics card models, the GeForce 6800 Ultra which commanded a price of $499 USD, and the GeForce 6800 (often referred to as the non-Ultra) at $299 USD. Let’s first consider the GPU itself, the NV40.
This week’s GPU Flashback Archive article continues with a look at the NVIDIA GeForce FX series, or GeForce 5 if you prefer to keep things somewhat tidier. In truth however, the FX series was perhaps one of the least tidy product launches that NVIDIA have produced. The GeForce FX series spanned two years in terms of graphics card releases, used a total of six different GPU designs, two manufacturing nodes, three bus interfaces and technically speaking three different kinds of memory. To keep things reasonably simple, we’ll look at the new features that the FX series debuted and the technologies that were introduced, while at the same time keeping our remit in focus with a look at the launch flagship GeForce FX 5800 Ultra, and the budget GeForce FX 5200, the most popular FX series card with HWBOT members.
The NVIDIA FX series replaced the previous generation of GeForce 4 series cards, at least in terms of product launch dates. In reality the two products overlapped during the period towards the end of 2002 and early 2003. Although the GeForce 4 series was a success, bringing the video game industry the hardware needed to make DirectX 8.0 a reality, the GeForce 4 MX series had left a sour taste in the mouth of many tech reviewers and hardcore gamers. Despite being branded as a fourth generation NVIDIA product, it entirely lacked DX8 compatibility.
Today our GPU Flashback Archive series continues with a look at the GeForce 4 series that arrived on store shelves back in early 2002. It was historically another successful product launch from NVIDIA, one that helped to consolidate the company’s position as one of only two major GPU vendors left in existence. The GeForce 4 series arrived with a slew of new features and a broad range of price point options, strengthening NVIDIA’s position as market leader. Let’s take a look at the technologies and innovations that arrived with the GeForce 4 series, the cards that were popular with HWBOT members and some of the notable scores that we can glean from the database.
At the heart of the GeForce 4 series we have a wholly new GPU design, the NV25, a GPU which offered significantly improved performance over the previous NV20 GPU used by the GeForce 3 series. It arrived in February 2002 with the launch of three new high-end cards: the flagship GeForce4 Ti 4600, the Ti 4400, and the Ti 4200, which arrived a few months later. These three cards essentially replaced the previous generation GeForce 3 Ti 500 and Ti 200 cards, which by early 2002 were becoming pretty rare due to stock shortages.
This week’s GPU Flashback Archive article is all about the GeForce 3 series of graphics cards from NVIDIA, a company that by this stage in history was recognized as the industry leader in GPU development and innovation. The third iteration of its GeForce brand launched with a hiccup or two in early 2001 and enjoyed status as the company’s top tier offering for around a year before it was usurped by its successor, the mighty GeForce 4 series. Let’s take a peek at the new technologies and innovations that arrived with GeForce 3, the cards that proved to be most popular with overclockers on HWBOT and of course, the notable scores and benchmarks that it spawned.
First let’s set the scene. NVIDIA’s arrival on the graphics card market in the late nineties had been wholly disruptive. After the TNT and RIVA series cards, NVIDIA blew the doors off the industry with its first GeForce series and simply didn’t look back. By the time we arrive at the GeForce 3 series, we find that Matrox had left the mainstream market to focus on more niche segments, while S3 Graphics were hanging on by the skin of their teeth. NVIDIA eventually put an end to 3dfx and their classic Voodoo cards by buying the company out. Only ATI endured, and we all know what eventually happened to them.
We return for our next episode of the GPU Flashback Archive with another classic graphics platform from NVIDIA, the GeForce2 series. It was unleashed on the scene in early 2000 and proved conclusively that NVIDIA had become the number one graphics company on the planet. Let’s take a look at the GeForce2 series as a whole, the cards that were popular at the time and of course a few of the scores that have been submitted to the HWBOT database using GeForce2 cards.
With the launch of the NVIDIA GeForce 256 card series in late 1999, the company had truly announced its presence on the graphics card market. Competing cards from ATI, S3, Matrox and 3dfx could not compete with the GeForce 256 DDR. Based on the NV10 GPU, it was the first to offer a hardware solution for T&L (Transform and Lighting) tasks, the fastest vertex processing yet seen, and probably the best gaming experience that anyone could imagine. NVIDIA stayed true to their core company identity and continued to follow a pretty aggressive product launch cadence. The GeForce brand was expanded to include the GeForce2 series just six months later, in sharp contrast to the release schedule the company keeps today.
Our GPU Flashback Archive series continues today with what can only be described as a pivotal moment in GPU history. The NVIDIA NV10 was in fact the first chip to be called a GPU, a term coined by NVIDIA themselves back in 1999. Let’s take a look at the chip itself and the two cards that were produced using it, plus a few of the more notable score submissions that have been made using the first ever generation of NVIDIA GeForce branded cards.
The NVIDIA RIVA series put the company firmly on the graphics card map, proving that the silicon they were producing could compete with offerings from other companies. It’s important to remember also that at the end of the nineties, you could purchase a card from one of several companies including ATI, S3, Matrox and 3dfx. NVIDIA as we all know would go on to become leader of the GPU market and one of the most successful companies in the industry. The direction taken with the first GeForce-branded GPU, the GeForce 256, reflects NVIDIA’s bold and ambitious approach as a company generally. The GeForce 256 was unique, offloading geometric calculations to a dedicated engine while also increasing the number of fixed-function pixel pipelines. The outcome was the first Direct3D 7-compliant card, one that offered a genuine leap in 3D gaming performance.
Having exhausted most of history’s CPU platforms and motherboards, this week we are launching a new series of historical articles that focus on graphics cards, GPUs and 3D benching. The series kicks off with arguably the first commercially successful GPUs from industry leader Nvidia, the Nvidia RIVA series. Join us as we take a look at the technologies that arrived with the RIVA series of graphics cards, the most popular cards that have been used by overclockers on HWBOT and also a few of the more notable score submissions that have been made using Nvidia RIVA cards.
The Nvidia RIVA 128 graphics chip (codenamed the NV3) was the first version of the RIVA GPU series. It arrived on the scene in April of 1997 and was arguably the company’s first ever commercially successful graphics processing unit. The RIVA 128 was actually a departure from Nvidia’s very first product, the STG-2000 (NV1), which had been the first Nvidia chip on the market capable of both 2D and 3D video acceleration. Unlike its predecessor, the Nvidia RIVA was designed specifically to accelerate rendering of Direct3D 5.0 and OpenGL 1.0 API workloads.
The RIVA 128 was fabricated on the 350nm manufacturing process, supported both PCI and AGP 2x interfaces and arrived with the GPU clocked at 100MHz, paired with 4MB of SGRAM (Synchronous Graphics RAM) also clocked at 100MHz on a 128-bit memory bus. Cards based on the RIVA 128 GPU were able to rival equivalent offerings from industry leader 3dfx.
The final article in our Motherboard Memory Lane series brings us right up to date with a look at the current AMD AM4 platform. AM4 series motherboards support AMD Zen architecture CPUs, a new platform which AMD hoped would finally elevate the company back into the upper-mainstream PC component ecosystem, a place that had been utterly dominated by Intel for most of the last decade. The platform arrived with a new socket, new chipset series, new AMD Ryzen CPUs and a newly invigorated sense of purpose. Let’s take a look at the key platform features, the motherboards that are currently most popular and the CPUs that are being used to make some very decent scores on the HWBOT database.
The first systems to use the AMD AM4 socket were in fact built in late 2016 by OEMs HP and Lenovo, who were given exclusive access to the new platform. These arrived with Bristol Ridge-based APUs that featured Excavator cores, the last iteration of AMD’s Bulldozer CPU architecture. As far as the mainstream DIY PC consumer and enthusiast space was concerned, it barely registered a blip on the radar. We were all far too preoccupied with waiting for Zen to arrive.