Series of supercomputers by IBM

IBM Blue Gene

A Blue Gene/P supercomputer at Argonne National Laboratory

Developer IBM
Type Supercomputer platform
Release date BG/L: Feb 1999
BG/P: June 2007
BG/Q: Nov 2011
Discontinued 2015
CPU BG/L: PowerPC 440
BG/P: PowerPC 450
BG/Q: PowerPC A2
Predecessor IBM RS/6000 SP;
QCDOC
Successor IBM PERCS

Hierarchy of Blue Gene processing units

Blue Gene is an IBM project aimed at designing supercomputers that can reach operating speeds in the petaFLOPS (PFLOPS) range, with low power consumption.

The project created three generations of supercomputers, Blue Gene/L, Blue Gene/P, and Blue Gene/Q. During their deployment, Blue Gene systems often led the TOP500[1] and Green500[2] rankings of the most powerful and most power-efficient supercomputers, respectively. Blue Gene systems have also consistently scored top positions in the Graph500 list.[3] The project was awarded the 2009 National Medal of Technology and Innovation.[4]

As of 2015, IBM seems to have ended the development of the Blue Gene family,[5] though no public announcement has been made. IBM's continuing efforts in the supercomputer scene seem to be concentrated around OpenPOWER, using accelerators such as FPGAs and GPUs to battle the end of Moore's law.[6]

History

In December 1999, IBM announced a US$100 million research initiative for a five-year effort to build a massively parallel computer, to be applied to the study of biomolecular phenomena such as protein folding.[7] The project had two main goals: to advance our understanding of the mechanisms behind protein folding via large-scale simulation, and to explore novel ideas in massively parallel machine architecture and software. Major areas of investigation included: how to use this novel platform to effectively meet its scientific goals, how to make such massively parallel machines more usable, and how to achieve performance targets at a reasonable cost, through novel machine architectures. The initial design for Blue Gene was based on an early version of the Cyclops64 architecture, designed by Monty Denneau. The initial research and development work was pursued at the IBM T.J. Watson Research Center and led by William R. Pulleyblank.[8]

At IBM, Alan Gara started working on an extension of the QCDOC architecture into a more general-purpose supercomputer: the 4D nearest-neighbour interconnection network was replaced by a network supporting routing of messages from any node to any other, and a parallel I/O subsystem was added. DOE started funding the development of this system and it became known as Blue Gene/L (L for Light); development of the original Blue Gene system continued under the name Blue Gene/C (C for Cyclops) and, later, Cyclops64.

In November 2004 a 16-rack system, with each rack holding 1,024 compute nodes, achieved first place in the TOP500 list, with a Linpack performance of 70.72 TFLOPS.[1] It thereby overtook NEC's Earth Simulator, which had held the title of the fastest computer in the world since 2002. From 2004 through 2007 the Blue Gene/L installation at LLNL[9] gradually expanded to 104 racks, achieving 478 TFLOPS Linpack and 596 TFLOPS peak. The LLNL BlueGene/L installation held the first position in the TOP500 list for 3.5 years, until in June 2008 it was overtaken by IBM's Cell-based Roadrunner system at Los Alamos National Laboratory, which was the first system to surpass the one petaFLOPS mark. The system was built at the IBM plant in Rochester, MN.

While the LLNL installation was the largest Blue Gene/L installation, many smaller installations followed. In November 2006, there were 27 computers on the TOP500 list using the Blue Gene/L architecture. All these computers were listed as having an architecture of eServer Blue Gene Solution. For instance, three racks of Blue Gene/L were housed at the San Diego Supercomputer Center.

While the TOP500 measures performance on a single benchmark application, Linpack, Blue Gene/L also set records for performance on a wider set of applications. Blue Gene/L was the first supercomputer ever to run over 100 TFLOPS sustained on a real-world application, namely a three-dimensional molecular dynamics code (ddcMD), simulating solidification (nucleation and growth processes) of molten metal under high pressure and temperature conditions. This achievement won the 2005 Gordon Bell Prize.

In June 2006, NNSA and IBM announced that Blue Gene/L achieved 207.3 TFLOPS on a quantum chemical application (Qbox).[10] At Supercomputing 2006,[11] Blue Gene/L was awarded the winning prize in all HPC Challenge Classes of awards.[12] In 2007, a team from the IBM Almaden Research Center and the University of Nevada ran an artificial neural network almost half as complex as the brain of a mouse for the equivalent of a second (the network was run at 1/10 of normal speed for 10 seconds).[13]

The name

The name Blue Gene comes from what it was originally designed to do, help biologists understand the processes of protein folding and gene development.[14] "Blue" is a traditional moniker that IBM uses for many of its products and the company itself. The original Blue Gene design was renamed "Blue Gene/C" and eventually Cyclops64. The "L" in Blue Gene/L comes from "Light", as that design's original name was "Blue Light". The "P" version was designed to be a petascale design. "Q" is just the letter after "P". There is no Blue Gene/R.[15]

Major features

The Blue Gene/L supercomputer was unique in the following aspects:[16]

  • Trading the speed of processors for lower power consumption. Blue Gene/L used low-frequency and low-power embedded PowerPC cores with floating-point accelerators. While the performance of each chip was relatively low, the system could achieve better power efficiency for applications that could use large numbers of nodes.
  • Dual processors per node with two working modes: co-processor mode, where one processor handles computation and the other handles communication; and virtual-node mode, where both processors are available to run user code, but the processors share both the computation and the communication load.
  • System-on-a-chip design. All node components were embedded on a single chip, with the exception of 512 MB of external DRAM.
  • A large number of nodes (scalable in increments of 1024 up to at least 65,536)
  • Three-dimensional torus interconnect with auxiliary networks for global communications (broadcast and reductions), I/O, and management
  • Lightweight OS per node for minimum system overhead (system noise).

Architecture

The Blue Gene/L architecture was an evolution of the QCDSP and QCDOC architectures. Each Blue Gene/L Compute or I/O node was a single ASIC with associated DRAM memory chips. The ASIC integrated two 700 MHz PowerPC 440 embedded processors, each with a double-pipeline double-precision Floating Point Unit (FPU), a cache sub-system with built-in DRAM controller, and the logic to support multiple communication sub-systems. The dual FPUs gave each Blue Gene/L node a theoretical peak performance of 5.6 GFLOPS (gigaFLOPS). The two CPUs were not cache coherent with one another.
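
The quoted node peak can be checked against these figures, assuming (an inference, not stated explicitly above) that each double-pipeline FPU completes two fused multiply-adds, i.e. four floating-point operations, per cycle:

$$ 2\ \text{cores} \times 0.7\ \text{GHz} \times 4\ \tfrac{\text{FLOP}}{\text{cycle} \cdot \text{core}} = 5.6\ \text{GFLOPS per node.} $$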

Compute nodes were packaged two per compute card, with 16 compute cards plus up to two I/O nodes per node board. There were 32 node boards per cabinet/rack.[17] By the integration of all essential sub-systems on a single chip, and the use of low-power logic, each Compute or I/O node dissipated low power (about 17 watts, including DRAMs). This allowed aggressive packaging of up to 1024 compute nodes, plus additional I/O nodes, in a standard 19-inch rack, within reasonable limits of electrical power supply and air cooling. The performance metrics, in terms of FLOPS per watt, FLOPS per m² of floorspace and FLOPS per unit cost, allowed scaling up to very high performance. With so many nodes, component failures were inevitable. The system was able to electrically isolate faulty components, down to a granularity of half a rack (512 compute nodes), to allow the machine to continue to run.
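
As a rough consistency check on these packaging figures (a sketch only; I/O nodes, fans and power conversion are not counted):

$$ 2\ \tfrac{\text{nodes}}{\text{card}} \times 16\ \tfrac{\text{cards}}{\text{board}} \times 32\ \tfrac{\text{boards}}{\text{rack}} = 1024\ \text{compute nodes per rack}, $$
$$ 1024 \times 5.6\ \text{GFLOPS} \approx 5.7\ \text{TFLOPS peak}, \qquad 1024 \times 17\ \text{W} \approx 17.4\ \text{kW per rack.} $$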

Each Blue Gene/L node was attached to three parallel communications networks: a 3D toroidal network for peer-to-peer communication between compute nodes, a collective network for collective communication (broadcasts and reduce operations), and a global interrupt network for fast barriers. The I/O nodes, which ran the Linux operating system, provided communication to storage and external hosts via an Ethernet network. The I/O nodes handled filesystem operations on behalf of the compute nodes. Finally, a separate and private Ethernet network provided access to any node for configuration, booting and diagnostics. To allow multiple programs to run concurrently, a Blue Gene/L system could be partitioned into electronically isolated sets of nodes. The number of nodes in a partition had to be a positive integer power of 2, with at least 2⁵ = 32 nodes. To run a program on Blue Gene/L, a partition of the computer was first reserved. The program was then loaded and run on all the nodes within the partition, and no other program could access nodes within the partition while it was in use. Upon completion, the partition nodes were released for future programs to use.

Blue Gene/L compute nodes used a minimal operating system supporting a single user program. Only a subset of POSIX calls was supported, and only one process could run at a time on a node in co-processor mode, or one process per CPU in virtual mode. Programmers needed to implement green threads in order to simulate local concurrency. Application development was usually performed in C, C++, or Fortran using MPI for communication. However, some scripting languages such as Ruby[18] and Python[19] have been ported to the compute nodes.
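
As an illustration of this programming model, the following is a minimal, generic MPI sketch in C. It is not IBM-specific code and assumes only a standard MPI library; it maps ranks onto a periodic three-dimensional grid, echoing the torus network described above, and exchanges one value with a nearest neighbour:

    /* Minimal sketch of a Blue Gene/L-style MPI program (generic MPI, not IBM code). */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int nranks, rank;
        MPI_Comm_size(MPI_COMM_WORLD, &nranks);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Let MPI factor the job into a 3D grid; periods = 1 makes every
           dimension wrap around, mirroring the physical torus network. */
        int dims[3] = {0, 0, 0}, periods[3] = {1, 1, 1};
        MPI_Dims_create(nranks, 3, dims);

        MPI_Comm torus;
        MPI_Cart_create(MPI_COMM_WORLD, 3, dims, periods, 1, &torus);

        /* Find the -x and +x neighbours and swap one value with them. */
        int left, right;
        MPI_Cart_shift(torus, 0, 1, &left, &right);

        double send = (double)rank, recv = -1.0;
        MPI_Sendrecv(&send, 1, MPI_DOUBLE, right, 0,
                     &recv, 1, MPI_DOUBLE, left, 0,
                     torus, MPI_STATUS_IGNORE);

        printf("rank %d received %g from its -x neighbour\n", rank, recv);

        MPI_Comm_free(&torus);
        MPI_Finalize();
        return 0;
    }

On Blue Gene/L such a program would be launched on a reserved partition, with one rank per node in co-processor mode or one rank per CPU in virtual-node mode.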

IBM has published BlueMatter, the application developed to exercise Blue Gene/L, as open source here.[20] This serves to document how the torus and collective interfaces were used by applications, and may serve as a base for others to exercise the current generation of supercomputers.

Blue Gene/P

A schematic overview of a Blue Gene/P supercomputer

In June 2007, IBM unveiled Blue Gene/P, the second generation of the Blue Gene series of supercomputers, designed through a collaboration that included IBM, LLNL, and Argonne National Laboratory's Leadership Computing Facility.[21]

Design

The design of Blue Gene/P is a technology evolution from Blue Gene/L. Each Blue Gene/P Compute chip contains four PowerPC 450 processor cores, running at 850 MHz. The cores are cache coherent and the chip can operate as a 4-way symmetric multiprocessor (SMP). The memory subsystem on the chip consists of small private L2 caches, a central shared 8 MB L3 cache, and dual DDR2 memory controllers. The chip also integrates the logic for node-to-node communication, using the same network topologies as Blue Gene/L, but at more than twice the bandwidth. A compute card contains a Blue Gene/P chip with 2 or 4 GB DRAM, comprising a "compute node". A single compute node has a peak performance of 13.6 GFLOPS. 32 compute cards are plugged into an air-cooled node board. A rack contains 32 node boards (thus 1024 nodes, 4096 processor cores).[22] By using many small, low-power, densely packaged chips, Blue Gene/P exceeded the power efficiency of other supercomputers of its generation, and at 371 MFLOPS/W Blue Gene/P installations ranked at or near the top of the Green500 lists in 2007-2008.[2]
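
The per-node and per-rack figures are consistent under the assumption (not stated above) that each PowerPC 450 core's dual FPU retires four floating-point operations per cycle:

$$ 4\ \text{cores} \times 0.85\ \text{GHz} \times 4\ \tfrac{\text{FLOP}}{\text{cycle} \cdot \text{core}} = 13.6\ \text{GFLOPS per node}, \qquad 1024 \times 13.6\ \text{GFLOPS} \approx 13.9\ \text{TFLOPS peak per rack.} $$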

Installations

The following is an incomplete list of Blue Gene/P installations. As of November 2009, the TOP500 list contained 15 Blue Gene/P installations of two racks (2048 nodes, 8192 processor cores, 23.86 TFLOPS Linpack) and larger.[1]

  • On November 12, 2007, the first Blue Gene/P installation, JUGENE, with 16 racks (16,384 nodes, 65,536 processors), was running at Forschungszentrum Jülich in Germany with a performance of 167 TFLOPS.[23] When inaugurated it was the fastest supercomputer in Europe and the sixth fastest in the world. In 2009, JUGENE was upgraded to 72 racks (73,728 nodes, 294,912 processor cores) with 144 terabytes of memory and 6 petabytes of storage, and achieved a peak performance of 1 PetaFLOPS. This configuration incorporated new air-to-water heat exchangers between the racks, reducing the cooling cost substantially.[24] JUGENE was shut down in July 2012 and replaced by the Blue Gene/Q system JUQUEEN.
  • The 40-rack (40,960 nodes, 163,840 processor cores) "Intrepid" system at Argonne National Laboratory was ranked #3 on the June 2008 TOP500 list.[25] The Intrepid system is one of the major resources of the INCITE program, in which processor hours are awarded to "grand challenge" science and engineering projects in a peer-reviewed competition.
  • Lawrence Livermore National Laboratory installed a 36-rack Blue Gene/P installation, "Dawn", in 2009.
  • The King Abdullah University of Science and Technology (KAUST) installed a 16-rack Blue Gene/P installation, "Shaheen", in 2009.
  • In 2012, a 6-rack Blue Gene/P was installed at Rice University and will be jointly administered with the University of São Paulo.[26]
  • A 2.5-rack Blue Gene/P system is the central processor for the Low Frequency Array for Radio astronomy (LOFAR) project in the Netherlands and surrounding European countries. This application uses the streaming data capabilities of the machine.
  • A 2-rack Blue Gene/P was installed in September 2008 in Sofia, Bulgaria, and is operated by the Bulgarian Academy of Sciences and Sofia University.[27]
  • In 2010, a 2-rack (8192-core) Blue Gene/P was installed at the University of Melbourne for the Victorian Life Sciences Computation Initiative.[28]
  • In 2011, a 2-rack Blue Gene/P was installed at University of Canterbury in Christchurch, New Zealand.
  • In 2012, a 2-rack Blue Gene/P was installed at Rutgers University in Piscataway, New Jersey. It was dubbed "Excalibur" as an homage to the Rutgers mascot, the Scarlet Knight.[29]
  • In 2008, a 1-rack (1024 nodes) Blue Gene/P with 180 TB of storage was installed at the University of Rochester in Rochester, New York.[30]
  • The first Blue Gene/P in the ASEAN region was installed in 2010 at Universiti Brunei Darussalam's research centre, the UBD-IBM Centre. The installation has prompted research collaboration between the university and IBM Research on climate modeling that will investigate the impact of climate change on flood forecasting, crop yields, renewable energy and the health of rainforests in the region, among others.[31]
  • In 2013, a 1-rack Blue Gene/P was donated to the Department of Science and Technology for weather forecasts, disaster management, precision agriculture, and health. It is housed in the National Computer Center, Diliman, Quezon City, under the auspices of the Philippine Genome Center (PGC) Core Facility for Bioinformatics (CFB) at UP Diliman, Quezon City.[32]

Applications

  • Veselin Topalov, the challenger to the World Chess Champion title in 2010, confirmed in an interview that he had used a Blue Gene/P supercomputer during his preparation for the match.[33]
  • The Blue Gene/P computer has been used to simulate approximately one percent of a human cerebral cortex, containing 1.6 billion neurons with approximately 9 trillion connections.[34]
  • The IBM Kittyhawk project team has ported Linux to the compute nodes and demonstrated generic Web 2.0 workloads running at scale on a Blue Gene/P. Their paper, published in the ACM Operating Systems Review, describes a kernel driver that tunnels Ethernet over the tree network, which results in all-to-all TCP/IP connectivity.[35][36] Running standard Linux software like MySQL, their performance results on SpecJBB rank among the highest on record.[citation needed]
  • In 2011, a Rutgers University / IBM / University of Texas team linked the KAUST Shaheen installation together with a Blue Gene/P installation at the IBM Watson Research Center into a "federated high performance computing cloud", winning the IEEE SCALE 2011 challenge with an oil reservoir optimization application.[37]

Blue Gene/Q

The third supercomputer design in the Blue Gene series, Blue Gene/Q, has a peak performance of 20 Petaflops,[38] reaching LINPACK benchmark performance of 17 Petaflops. Blue Gene/Q continues to expand and enhance the Blue Gene/L and /P architectures.

Design

The Blue Gene/Q Compute chip is an 18-core chip. The 64-bit A2 processor cores are 4-way simultaneously multithreaded, and run at 1.6 GHz. Each processor core has a SIMD Quad-vector double precision floating point unit (IBM QPX). 16 processor cores are used for computing, and a 17th core for operating system assist functions such as interrupts, asynchronous I/O, MPI pacing and RAS. The 18th core is used as a redundant spare to increase manufacturing yield. The spared-out core is shut down in functional operation. The processor cores are linked by a crossbar switch to a 32 MB eDRAM L2 cache, operating at half core speed. The L2 cache is multi-versioned, supporting transactional memory and speculative execution, and has hardware support for atomic operations.[39] L2 cache misses are handled by two built-in DDR3 memory controllers running at 1.33 GHz. The chip also integrates logic for chip-to-chip communications in a 5D torus configuration, with 2 GB/s chip-to-chip links. The Blue Gene/Q chip is manufactured on IBM's copper SOI process at 45 nm. It delivers a peak performance of 204.8 GFLOPS at 1.6 GHz, drawing about 55 watts. The chip measures 19×19 mm (359.5 mm²) and comprises 1.47 billion transistors. The chip is mounted on a compute card along with 16 GB DDR3 DRAM (i.e., 1 GB for each user processor core).[40]
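
The 204.8 GFLOPS chip peak follows from the core count and clock rate, assuming (an inference, not stated above) that each QPX unit completes four double-precision fused multiply-adds, i.e. eight floating-point operations, per cycle on each of the 16 user cores:

$$ 16\ \text{cores} \times 1.6\ \text{GHz} \times 8\ \tfrac{\text{FLOP}}{\text{cycle} \cdot \text{core}} = 204.8\ \text{GFLOPS per chip.} $$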

A Q32[41] compute drawer contains 32 compute cards, each water cooled.[42] A "midplane" (crate) contains 16 Q32 compute drawers for a total of 512 compute nodes, electrically interconnected in a 5D torus configuration (4×4×4×4×2). Beyond the midplane level, all connections are optical. Racks have two midplanes, thus 32 compute drawers, for a total of 1024 compute nodes, 16,384 user cores and 16 TB RAM.[42]
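
These packaging figures are mutually consistent:

$$ 4 \times 4 \times 4 \times 4 \times 2 = 512\ \text{nodes per midplane}, \qquad 2 \times 512 = 1024\ \text{nodes per rack}, $$
$$ 1024 \times 16\ \text{user cores} = 16{,}384\ \text{cores}, \qquad 1024 \times 16\ \text{GB} = 16\ \text{TB RAM.} $$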

Separate I/O drawers, placed at the top of a rack or in a separate rack, are air cooled and contain 8 compute cards and 8 PCIe expansion slots for InfiniBand or 10 Gigabit Ethernet networking.[42]

Performance

At the time of the Blue Gene/Q system announcement in November 2011, an initial 4-rack Blue Gene/Q system (4096 nodes, 65,536 user processor cores) achieved #17 in the TOP500 list[1] with 677.1 TeraFLOPS Linpack, outperforming the original 2007 104-rack BlueGene/L installation described above. The same 4-rack system achieved the top position in the Graph500 list[3] with over 250 GTEPS (giga traversed edges per second). Blue Gene/Q systems also topped the Green500 list of most energy efficient supercomputers with up to 2.1 GFLOPS/W.[2]

In June 2012, Blue Gene/Q installations took the top positions in all three lists: TOP500,[1] Graph500[3] and Green500.[2]

Installations

The following is an incomplete list of Blue Gene/Q installations. As of June 2012, the TOP500 list contained 20 Blue Gene/Q installations of 1/2-rack (512 nodes, 8192 processor cores, 86.35 TFLOPS Linpack) and larger.[1] At a (size-independent) power efficiency of about 2.1 GFLOPS/W, all these systems also populated the top of the June 2012 Green500 list.[2]

  • A Blue Gene/Q system called Sequoia was delivered to the Lawrence Livermore National Laboratory (LLNL) starting in 2011 and was fully deployed in June 2012. It is part of the Advanced Simulation and Computing Program running nuclear simulations and advanced scientific research. It consists of 96 racks (comprising 98,304 compute nodes with 1.6 million processor cores and 1.6 PB of memory) covering an area of about 3,000 square feet (280 m²).[43] In June 2012, the system was ranked as the world's fastest supercomputer[44][45] at 20.1 PFLOPS peak, 16.32 PFLOPS sustained (Linpack), drawing up to 7.9 megawatts of power.[1] In June 2013, its performance is listed at 17.17 PFLOPS sustained (Linpack).[1]
  • A 10 PFLOPS (peak) Blue Gene/Q system called Mira was installed at Argonne National Laboratory in the Argonne Leadership Computing Facility in 2012. It consists of 48 racks (49,152 compute nodes), with 70 PB of disk storage (470 GB/s I/O bandwidth).[46][47]
  • JUQUEEN at the Forschungszentrum Jülich is a 28-rack Blue Gene/Q system, and was from June 2013 to November 2015 the highest ranked machine in Europe in the TOP500.[1]
  • Vulcan at Lawrence Livermore National Laboratory (LLNL) is a 24-rack, 5 PFLOPS (peak), Blue Gene/Q system that was commissioned in 2012 and decommissioned in 2019.[48] Vulcan served lab-industry projects through Livermore's High Performance Computing (HPC) Innovation Center[49] as well as academic collaborations in support of DOE/National Nuclear Security Administration (NNSA) missions.[50]
  • Fermi at the CINECA Supercomputing facility, Bologna, Italy,[51] is a 10-rack, 2 PFLOPS (peak), Blue Gene/Q system.
  • As part of DiRAC, the EPCC hosts a 6-rack (6144-node) Blue Gene/Q system at the University of Edinburgh.[52]
  • A five-rack Blue Gene/Q system with additional compute hardware called AMOS was installed at Rensselaer Polytechnic Institute in 2013.[53] The system was rated at 1048.6 teraflops, the most powerful supercomputer at any private university, and third most powerful supercomputer among all universities in 2014.[54]
  • An 838 TFLOPS (peak) Blue Gene/Q system called Avoca was installed at the Victorian Life Sciences Computation Initiative in June 2012.[55] This system is part of a collaboration between IBM and VLSCI, with the aims of improving diagnostics, finding new drug targets, refining treatments and furthering our understanding of diseases.[56] The system consists of 4 racks, with 350 TB of storage, 65,536 cores, and 64 TB RAM.[57]
  • A 209 TFLOPS (peak) Blue Gene/Q system was installed at the University of Rochester in July 2012.[58] This system is part of the Health Sciences Center for Computational Innovation, which is dedicated to the application of high-performance computing to research programs in the health sciences. The system consists of a single rack (1,024 compute nodes) with 400 TB of high-performance storage.[59]
  • A 209 TFLOPS peak (172 TFLOPS LINPACK) Blue Gene/Q system called Lemanicus was installed at the EPFL in March 2013.[60] This system belongs to the Centre for Advanced Modeling Science CADMOS,[61] which is a collaboration between the three main research institutions on the shore of Lake Geneva in the French-speaking part of Switzerland: University of Lausanne, University of Geneva, and EPFL. The system consists of a single rack (1,024 compute nodes) with 2.1 PB of IBM GPFS-GSS storage.
  • A half-rack Blue Gene/Q system, with about 100 TFLOPS (peak), called Cumulus was installed at the A*STAR Computational Resource Centre, Singapore, in early 2011.[62]

Applications

Record-breaking science applications have been run on the BG/Q, the first to cross 10 petaflops of sustained performance. The cosmology simulation framework HACC achieved almost 14 petaflops with a 3.6 trillion particle benchmark run,[63] while the Cardioid code,[64][65] which models the electrophysiology of the human heart, achieved nearly 12 petaflops with a near real-time simulation, both on Sequoia. A fully compressible flow solver has also achieved 14.4 PFLOP/s (originally 11 PFLOP/s) on Sequoia, 72% of the machine's nominal peak performance.[66]

See also

  • CNK operating system
  • INK operating system
  • Deep Blue (chess computer)

References

  1. ^ a b c d e f g h i "November 2004 - TOP500 Supercomputer Sites". Top500.org. Retrieved 13 December 2019.
  2. ^ a b c d e "Green500 - TOP500 Supercomputer Sites". Green500.org. Archived from the original on 26 August 2016. Retrieved 13 October 2017.
  3. ^ a b c "The Graph500 List". Archived from the original on 2011-12-27.
  4. ^ Harris, Mark (September 18, 2009). "Obama honours IBM supercomputer". Techradar.com. Retrieved 2009-09-18.
  5. ^ "Supercomputing Strategy Shifts in a World Without BlueGene". Nextplatform.com. 14 April 2015. Retrieved 13 October 2017.
  6. ^ "IBM to Build DoE's Next-Gen Coral Supercomputers - EE Times". EETimes. Archived from the original on 30 April 2017. Retrieved 13 October 2017.
  7. ^ "Blue Gene: A Vision for Protein Science using a Petaflop Supercomputer" (PDF). IBM Systems Journal. 40 (2). 2017-10-23.
  8. ^ "A Talk with the Brain behind Blue Gene", BusinessWeek, November 6, 2001.
  9. ^ "BlueGene/L". Archived from the original on 2011-07-18. Retrieved 2007-10-05.
  10. ^ "hpcwire.com". Archived from the original on September 28, 2007.
  11. ^ "SC06". sc06.supercomputing.org . Retrieved 13 October 2017.
  12. ^ "Archived copy". Archived from the original on 2006-12-11. Retrieved 2006-12-03 . {{cite web}}: CS1 maint: archived copy as title (link)
  13. ^ "Mouse brain imitation on calculator". BBC News. April 27, 2007. Archived from the original on 2007-05-25.
  14. ^ "IBM100 - Blue Gene". 03.ibm.com. 7 March 2012. Retrieved xiii October 2017.
  15. ^ Kunkel, Julian M.; Ludwig, Thomas; Meuer, Hans (12 June 2013). Supercomputing: 28th International Supercomputing Conference, ISC 2013, Leipzig, Germany, June 16-20, 2013. Proceedings. Springer. ISBN9783642387500 . Retrieved 13 October 2017 – via Google Books.
  16. ^ "Blue Gene". IBM Journal of Research and Development. 49 (2/3). 2005.
  17. ^ Kissel, Lynn. "BlueGene/L Configuration". asc.llnl.gov. Archived from the original on 17 February 2013. Retrieved 13 October 2017.
  18. ^ "ece.iastate.edu". Archived from the original on Apr 29, 2007.
  19. ^ William Scullin (March 12, 2011). Python for High Performance Computing. Atlanta, GA.
  20. ^ Blue Matter source code, retrieved Feb 28, 2020
  21. ^ "IBM Triples Functioning of Globe'due south Fastest, Nearly Energy-Efficient Supercomputer". 2007-06-27. Retrieved 2011-12-24 .
  22. ^ "Overview of the IBM Blue Cistron/P projection". IBM Journal of Enquiry and Evolution. 52: 199–220. Jan 2008. doi:ten.1147/rd.521.0199.
  23. ^ "Supercomputing: Jülich Amongst World Leaders Once again". IDG News Service. 2007-11-12.
  24. ^ "IBM Press room - 2009-02-x New IBM Petaflop Supercomputer at High german Forschungszentrum Juelich to Be Europe'south Most Powerful". 03.ibm.com. 2009-02-10. Retrieved 2011-03-11 .
  25. ^ "Argonne's Supercomputer Named Earth'southward Fastest for Open Science, Third Overall". Mcs.anl.gov. Archived from the original on 8 Feb 2009. Retrieved 13 October 2017.
  26. ^ "Rice University, IBM partner to bring starting time Blue Cistron supercomputer to Texas". news.rice.edu.
  27. ^ Вече си имаме и суперкомпютър [We now have a supercomputer too]. Archived 2009-12-23 at the Wayback Machine, Dir.bg, 9 September 2008.
  28. ^ "IBM Press room - 2010-02-eleven IBM to Collaborate with Leading Australian Institutions to Push the Boundaries of Medical Enquiry - Australia". 03.ibm.com. 2010-02-xi. Retrieved 2011-03-11 .
  29. ^ "Archived copy". Archived from the original on 2013-03-06. Retrieved 2013-09-07 . {{cite web}}: CS1 maint: archived copy as title (link)
  30. ^ "Academy of Rochester and IBM Expand Partnership in Pursuit of New Frontiers in Wellness". University of Rochester Medical Center. May eleven, 2012. Archived from the original on 2012-05-eleven.
  31. ^ "IBM and Universiti Brunei Darussalam to Interact on Climate Modeling Research". IBM News Room. 2010-ten-13. Retrieved 18 October 2012.
  32. ^ Ronda, Rainier Allan. "DOST's supercomputer for scientists now operational". Philstar.com. Retrieved 13 October 2017.
  33. ^ "Topalov training with super computer Blue Gene P". Players.chessdo.com. Retrieved 13 October 2017.
  34. ^ Kaku, Michio. Physics of the Future (New York: Doubleday, 2011), 91.
  35. ^ "Project Kittyhawk: A Global-Scale Reckoner". Research.ibm.com . Retrieved 13 Oct 2017.
  36. ^ https://wayback.archive-it.org/all/20081031010631/http://weather.ou.edu/~apw/projects/kittyhawk/kittyhawk.pdf
  37. ^ "Rutgers-led Experts Assemble World-Spanning Supercomputer Cloud". News.rutgers.edu. 2011-07-06. Archived from the original on 2011-11-10. Retrieved 2011-12-24 .
  38. ^ "IBM announces 20-petaflops supercomputer". Kurzweil. eighteen November 2011. Retrieved 13 Nov 2012. IBM has announced the Blue Gene/Q supercomputer, with elevation performance of 20 petaflops
  39. ^ "Memory Speculation of the Blue Gene/Q Compute Flake". Retrieved 2011-12-23 .
  40. ^ "The Blue Gene/Q Compute bit" (PDF). Archived from the original (PDF) on 2015-04-29. Retrieved 2011-12-23 .
  41. ^ "IBM Blue Gene/Q supercomputer delivers petascale computing for high-performance computing applications" (PDF). 01.ibm.com . Retrieved 13 October 2017.
  42. ^ a b c "IBM uncloaks 20 petaflops BlueGene/Q super". The Register. 2010-eleven-22. Retrieved 2010-11-25 .
  43. ^ Feldman, Michael (2009-02-03). "Lawrence Livermore Prepares for 20 Petaflop Blue Gene/Q". HPCwire. Archived from the original on 2009-02-12. Retrieved 2011-03-11 .
  44. ^ B Johnston, Donald (2012-06-18). "NNSA's Sequoia supercomputer ranked as world's fastest". Archived from the original on 2014-09-02. Retrieved 2012-06-23.
  45. ^ "TOP500 Press Release". Archived from the original on June 24, 2012.
  46. ^ "MIRA: World'south fastest supercomputer - Argonne Leadership Computing Facility". Alcf.anl.gov . Retrieved thirteen Oct 2017.
  47. ^ "Mira - Argonne Leadership Computing Facility". Alcf.anl.gov . Retrieved thirteen Oct 2017.
  48. ^ "Vulcan—decommissioned". hpc.llnl.gov . Retrieved 10 April 2019.
  49. ^ "HPC Innovation Middle". hpcinnovationcenter.llnl.gov . Retrieved thirteen October 2017.
  50. ^ "Lawrence Livermore'south Vulcan brings 5 petaflops calculating power to collaborations with industry and academia to accelerate science and technology". Llnl.gov. 11 June 2013. Retrieved xiii October 2017.
  51. ^ "Archived copy". Archived from the original on 2013-10-30. Retrieved 2013-05-13 . {{cite web}}: CS1 maint: archived copy equally title (link)
  52. ^ "DiRAC BlueGene/Q". epcc.ed.air conditioning.united kingdom.
  53. ^ "Rensselaer at Petascale: AMOS Among the World's Fastest and Virtually Powerful Supercomputers". News.rpi.edu . Retrieved 13 October 2017.
  54. ^ Michael Mullaney. "AMOS Ranks 1st Among Supercomputers at Private American Universities". News.rpi.edu. Retrieved 13 October 2017.
  55. ^ "Globe's greenest supercomputer comes to Melbourne - The Melbourne Engineer". Themelbourneengineer.eng.unimelb.edu.au/. 16 February 2012. Retrieved 13 October 2017.
  56. ^ "Melbourne Bioinformatics - For all researchers and students based in Melbourne'southward biomedical and bioscience research precinct". Melbourne Bioinformatics . Retrieved 13 October 2017.
  57. ^ "Admission to High-stop Systems - Melbourne Bioinformatics". Vlsci.org.au . Retrieved 13 October 2017.
  58. ^ "University of Rochester Inaugurates New Era of Health Care Research". Rochester.edu . Retrieved 13 Oct 2017.
  59. ^ "Resources - Middle for Integrated Research Computing". Circ.rochester.edu . Retrieved 13 October 2017.
  60. ^ "EPFL BlueGene/L Homepage". Archived from the original on 2007-12-10. Retrieved 2021-03-10 .
  61. ^ Utilisateur, Super. "À propos". Cadmos.org. Archived from the original on 10 January 2016. Retrieved 13 October 2017.
  62. ^ "A*STAR Computational Resource Centre". Acrc.a-star.edu.sg . Retrieved 2016-08-24 .
  63. ^ S. Habib; V. Morozov; H. Finkel; A. Pope; K. Heitmann; K. Kumaran; T. Peterka; J. Insley; D. Daniel; P. Fasel; N. Frontiere & Z. Lukic (2012). "The Universe at Extreme Scale: Multi-Petaflop Sky Simulation on the BG/Q". arXiv:1211.4864 [cs.DC].
  64. ^ "Cardioid Cardiac Modeling Project". Researcher.watson.ibm.com. 25 July 2016. Retrieved 13 October 2017.
  65. ^ "Venturing into the Heart of High-Performance Computing Simulations". Str.llnl.gov. Archived from the original on 14 February 2013. Retrieved 13 October 2017.
  66. ^ "Cloud cavitation collapse". Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis - SC '13. SC '13: 1–13. 17 November 2013. doi:10.1145/2503210.2504565. ISBN 9781450323789. S2CID 12651650.

External links

  • IBM Research: Blue Gene
  • Next generation supercomputers - Blue Gene/P overview (pdf)
Records
Preceded by

NEC Earth Simulator
35.86 teraflops

World's most powerful supercomputer
(Blue Gene/L)

November 2004 – November 2007
Succeeded by

IBM Roadrunner
1.026 petaflops
