Another user emailed me saying he also saw this problem. His solution (which I tried successfully) was to re-create the project from scratch.

So, if you upgrade from 8.1.01 to 8.1.02 you may see Map fail. I've opened a Webcase with Xilinx.

John Providenza

Article: 97401
Jim Granville wrote:
> Many FPGA systems now include SoftCPUs, and that puts some portion
> of the design clearly into the software box.

In a prior thread about HDLs, the VHDL camp was VERY VERY proud to state they didn't need sequential programming languages like C-based HDLs/HLLs, since VHDL already had nearly functionally identical sequential statements and expressions, including extended data types like floating point.

*IF* software is defined as sequential programming in a high level (or any level) language, then sequential VHDL (and Verilog) clearly passes the walks like a duck, quacks like a duck, looks like a duck test for being software. I'm sure any lawyer would have little trouble showing C source and VHDL source for sequential floating point data manipulation as functionally identical, and a jury probably would completely ignore any hardware designer claiming that it was hardware because his degree was in electrical engineering and not computer science. Ditto for control and data paths with integer data representations. Ditto for control and data paths with bit level representations. Bit level "var1 AND var2" is the same if written var1*var2 or var1&var2; syntax doesn't matter.

There is a lot of chest beating about being a hardware designer, and having been formally trained in the lost art of schematic logic level design. Every year, as hardware design is dominated by large HDL systems designs, using algorithm-based FSMs and data paths in languages which are functionally identical to HLL programming languages, the difference and importance of obsolete logic level design training vanishes. If anything, programmers who have been trained to design, implement, and manage the life cycles of huge zero defect software projects are much better prepared to manage large HDL/HLL hardware development projects as automated synthesis completely replaces gate level design as an acceptable design practice.

Language features which impair the ability of synthesis tools to always generate formally verifiable and provably correct constructions will slowly fall by the wayside, and with them the ability to use arcane language hints to save a single gate in a few hundred instances of a large 20M gate design. Saving a few hundred (or thousand) gates in a $100 20M gate device, if it risks design errors that would cause a field problem or development delay, is a liability, not a necessary feature. HLL/HDL language features which are simply there to ensure job security for a certain class of programmers will fade away, in favor of expanding the labor pool and the ability to reuse engineering talent in multiple disciplines.

Even in the software world, there are MANY zero defect design environments ... from spacecraft mission systems to financial banking/transaction systems where bugs are simply not an option and rigorous testing is done before taking any line of code to production. And there are many hardware designs shipped just as buggy as your favorite Microsoft product, where 99.9% functionality with a one year operational life is good enough to ship, especially if there is a watchdog reset to clear latchups caused by known race conditions or power/ground rail instabilities at marginal operating conditions.

Article: 97402
On Tue, 21 Feb 2006 14:37:43 -0800, Dave Pollum wrote:
> My VHDL project has outgrown a XC95108 CPLD, so I'll be using a
> XC95144 instead. After running the ISE synthesizer and fitter, all of
> the XC95144's Function Block Inputs are used. Using exhaustive fit
> mode, 92% of the function block inputs are used. This still doesn't
> leave much room for additional features. I then told ISE to use a
> XC95144XL, instead. Only 64% of the function block inputs are used, and
> the other resources look good, too. [...] The odd thing is that
> the XL version uses _8 more_ flip flops than the standard version, and
> the timing report shows that the XL is faster than the std part, even
> though I selected 10 ns speed grade for both parts. I haven't
> simulated both chips yet.

The difference in flip-flop count could be due to the synthesiser using another kind of implementation for some feature for your older CPLD. For example, there are a couple of ways to encode a FSM, some being smaller and others being a bit bigger but faster.

Secondly, the 10 ns speed grade only says something about the pad-to-pad delay. If I recall correctly, this is the cumulative delay of a typical signal path. Since the -XL is a whole different chip (3.3V instead of 5V logic) the component delays making up the total delay are bound to be different.

Hope that helps.

Wouter
--
Replace "spamtrap" with first name for email address.

Article: 97403
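To make the encoding point above concrete, here is a minimal VHDL sketch of a four-state FSM. The fsm_encoding attribute shown is XST syntax, and the entity and names are invented for illustration: encoded sequentially, four states need two flip-flops; one-hot needs four, usually with faster next-state logic, which is exactly the kind of difference that can show up as a higher register count on a different target.

library ieee;
use ieee.std_logic_1164.all;

entity fsm_demo is
  port ( clk, rst, go : in  std_logic;
         done         : out std_logic );
end entity;

architecture rtl of fsm_demo is
  type state_t is (IDLE, RUN, WAIT1, FINISH);
  signal state : state_t;
  -- "sequential" -> 2 FFs for 4 states; "one-hot" -> 4 FFs but
  -- typically shorter next-state logic paths.
  attribute fsm_encoding : string;
  attribute fsm_encoding of state : signal is "one-hot";
begin
  process (clk)
  begin
    if rising_edge(clk) then
      if rst = '1' then
        state <= IDLE;
      else
        case state is
          when IDLE   => if go = '1' then state <= RUN; end if;
          when RUN    => state <= WAIT1;
          when WAIT1  => state <= FINISH;
          when FINISH => state <= IDLE;
        end case;
      end if;
    end if;
  end process;
  done <= '1' when state = FINISH else '0';
end architecture;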
Andy wrote:
> Do you recommend separate ibufds primitives, or a single
> ibufds_diff_out primitive?

The only reason I started using two IBUFDS's instead of one IBUFDS_DIFF_OUT was to avoid various tool bugs that dropped placement, I/O standard, and termination attributes when applied to the IBUFDS_DIFF_OUT components. The IBUFDS_DIFF_OUT is really just two IBUFDS's in disguise for V2/S3, but I haven't looked at the V4 implementation.

Brian

Article: 97404
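A sketch of the two-IBUFDS workaround Brian describes, assuming the Xilinx unisim component library; the IOSTANDARD value is a placeholder for whatever the board actually uses. Both buffers sit on the same pad pair, with I/IB swapped on the second to produce the complement output:

library ieee;
use ieee.std_logic_1164.all;
library unisim;
use unisim.vcomponents.all;

entity diff_in_pair is
  port ( pad_p, pad_n : in  std_logic;
         rx_p         : out std_logic;    -- true copy
         rx_n         : out std_logic );  -- inverted copy
end entity;

architecture rtl of diff_in_pair is
begin
  -- True polarity: P pad on I, N pad on IB.
  u_true : IBUFDS
    generic map ( IOSTANDARD => "LVDS_25" )  -- placeholder standard
    port map ( I => pad_p, IB => pad_n, O => rx_p );

  -- Complement: the same pair with I/IB swapped.
  u_comp : IBUFDS
    generic map ( IOSTANDARD => "LVDS_25" )
    port map ( I => pad_n, IB => pad_p, O => rx_n );
end architecture;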
nestorj@lafayette.edu wrote:
> I am building a design which consists of a two-dimensional array of
> small processing elements, each of which contains about 60 LUTs. The
> two dimensional array is constructed in Verilog by creating a "column"
> module which contains instances of all of the PEs in a column. An "array"
> module then puts together the columns.
>
> I would like to create placement constraints to make the placer follow
> the array structure. So I dug around in the documentation and found
> "RLOC", which sounds like it specifies what I want. So I added
> constraints to each column array that look something like this:
>
> module PE_column(blah blah)
>
> PE_instance C0(blah blah blah);
> // synthesis attribute RLOC of C0 is "x0y0"
>
> PE_instance C1(blah blah blah);
> // synthesis attribute RLOC of C1 is "x0y1"
>
> etc.
>
> endmodule
>
> I tried something similar to place the columns at x0y0, x1y0, etc.
> at the next level up in the hierarchy.
>
> When I run XST, it dutifully reports the constraints, but the placer
> apparently ignores them.
>
> If someone could point out what I'm missing, I'd really appreciate it.
> Is it necessary to make the lower-level instances RPMs? If so, what is
> the (currently) easiest way to do this?
>
> BTW I am using XST in ISE 7.1.03i, compiling to a XC2V6000.
>
> Thanks!
>
> John Nestor
> Lafayette College

The RLOCs only apply to primitives, and to hierarchical levels below with RLOCs on the primitives. In other words, they have to be put on the elements that are actually used in the FPGA. So yes, the lower level instances must also be RPMs, all the way down to the primitive level. You may be able to use an area constraint for what you are asking to do.

Article: 97405
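For reference, the area-constraint alternative mentioned at the end might look like this in UCF syntax. This is a hedged sketch only; the instance paths and SLICE range are placeholders, not taken from the poster's design:

# Group one column of PEs and pin the group to a slice range.
INST "array_i/col0/*" AREA_GROUP = "pe_col0" ;
AREA_GROUP "pe_col0" RANGE = SLICE_X0Y0:SLICE_X3Y31 ;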
Hal wrote:
> Does this run into skew problems between the main clock and the IOB clock?

The output DDR nets traverse loaded->unloaded, which shouldn't be a problem (except for the usual caveat about perhaps clocking the falling edge data with a falling edge clock ahead of the IOB). DDR inputs traverse unloaded->loaded, which might require opposite edge or 90/270 phasing.

IIRC, for fast V2 DDR inputs I used two differential local clock inputs (to work around limited local clock routing resources), DDR registers implemented in CLBs (published IOB timing at the time was obfuscated by the inclusion of DCM jitter in IOB setup/hold numbers), and a global clock input driving a DCM to generate 90/270 phases to help reclock the two-wide data path phases into the global clock domain.

Maybe I should have used input latches instead :)

Brian

Article: 97406
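For comparison with the V2-era workarounds Brian describes, the Virtex-4 IDDR primitive captures both edges directly in the IOB. A minimal VHDL sketch, assuming the unisim library; the SAME_EDGE_PIPELINED mode presents both bits in the rising-edge domain at the cost of an extra cycle of latency:

library ieee;
use ieee.std_logic_1164.all;
library unisim;
use unisim.vcomponents.all;

entity ddr_in is
  port ( clk    : in  std_logic;
         d      : in  std_logic;    -- DDR data pin (via an input buffer)
         q_rise : out std_logic;    -- bit captured on the rising edge
         q_fall : out std_logic );  -- bit captured on the falling edge
end entity;

architecture rtl of ddr_in is
begin
  u_iddr : IDDR
    generic map ( DDR_CLK_EDGE => "SAME_EDGE_PIPELINED" )
    port map ( C  => clk, CE => '1', D => d,
               Q1 => q_rise, Q2 => q_fall,
               R  => '0', S => '0' );
end architecture;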
fpga_toys@yahoo.com wrote:
> Isaac Bosompem wrote:
> > For me the biggest hurdle of learning to utilize VHDL was programming
> > my brain to not think of it as a programming language. Then everything
> > began to fall into place.
>
> Interesting discussion. In a prior discussion regarding "programming"
> or "designing" with C syntax HLLs or HDLs, it was interesting how many
> people took up arms that they could do everything in VHDL or Verilog
> that could be done with a C based fpga design language such as
> Celoxica's Handel-C, Impulse-C, FpgaC or similar tools. The argument
> was that VHDL/Verilog really isn't any different than C based HLL/HDL's
> for FPGA design, and frequently with the assertion that VHDL/Verilog
> was better.

Definitely C is linked fairly closely to VHDL/Verilog. But there are a few key differences that I had to consider when learning HDLs to truly understand what was going on. For example, the non-blocking assignments in clocked sequential processes in VHDL. I originally assumed that, like in software, signal assignments would happen instantly after the line has executed, but I was wrong. A few minutes playing around with ModelSim revealed that they occur on the following clock pulse (when the flip flops sample the data input). So there was a bit of a retraining process even though the syntax was somewhat familiar.

> So is an fpga design in VHDL/Verilog hardware, and the same realized
> equiv gates written in Celoxica's Handel-C software just because of
> the choice of language? Or is a VHDL/Verilog design that is the same
> as a Handel-C design software?

This is a fairly tough question, as we wouldn't be discussing this if it were something that we could all agree on. I believe that both are hardware, and I will explain my reasoning:

FpgaC, for example, is a totally different ball game from VHDL/Verilog, but they ultimately result in a piece of hardware at the output. FpgaC (from the example posted at the TMCC website at U of Toronto, where I happen to live :) ) hides the hardware elements from the designer completely, allowing them to give a software-like *DESCRIPTION* (key word) of the hardware. What you get is ultimately hardware that implements your "program". VHDL/Verilog, on the other hand, do hide most of the grunt work of doing digital design, but you still have some things left over, like what I pointed out above about the non-blocking signal assignments.

We have always progressed towards abstraction in the software world; similar pushes have also been made in the hardware world with EDAs and CAD software packages like MATLAB, which automate most of the grunt work. Perhaps program like HDL's are the new progression. All I can say, though, is only time will tell. It depends on how well compilers like FpgaC will be able to convert a program to a hardware description, and also on how well they are able to extract and find opportunities for concurrency.

-Isaac

Article: 97407
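Isaac's ModelSim observation is easy to reproduce. A minimal VHDL sketch with invented names: because a signal assigned in a clocked process keeps its old value until the process suspends, reading a on the same edge that assigns it yields the previous value, so the code below infers two flip-flops (a shift register), not one.

library ieee;
use ieee.std_logic_1164.all;

entity sig_delay_demo is
  port ( clk : in  std_logic;
         d   : in  std_logic;
         q2  : out std_logic );
end entity;

architecture rtl of sig_delay_demo is
  signal a, b : std_logic := '0';
begin
  process (clk)
  begin
    if rising_edge(clk) then
      a <= d;   -- a updates only after the process suspends
      b <= a;   -- so b gets a's *previous* value: two FFs, not one
    end if;
  end process;
  q2 <= b;
end architecture;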
Sorry, I have a bad habit of not reading through my replies. I am using Google so please spare me :)

I meant "Perhaps programs like FpgaC are the new progression".

Article: 97408
On 21 Feb 2006 13:20:50 -0800, "aayush" <aayush_v2@rediffmail.com> wrote:
> ethernet traffic is digital.

Not if you're looking at the cat3 wire.

> now a logic "zero" must be a
> band of voltage and similarly a logic "one".

Not really. For 10BASE-T, a {high,low} is one and {low,high} is zero. Read about Manchester coding. For 100BASE-TX it is quite a bit more complicated.

> now i wanted to ask if i am
> using a 10/100 Mbps ethernet card or lan card as they are called what
> will be my voltages level. ie the RJ 45 female of my lan card will send
> a logic "0" at which voltage/voltage band and on what voltage/voltage
> band will be logic "1". its important to find these as i am connecting
> FPGA to lan card and hoping that signals fm lan card does not blow up
> the damn chip.

You always need a network chip (at least a PHY, or PHY+MAC) which you connect to the RJ45 to talk to the wire. You shouldn't connect the FPGA to the RJ45 directly. If you connect the FPGA to the LAN card you don't need to know anything about what goes on the wire, as it completely isolates it from you and gives you well-behaved bits.

Article: 97409
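A minimal VHDL sketch of the Manchester convention described above ({high,low} = one, {low,high} = zero). Names are invented, and it assumes a clock at twice the bit rate with the data bit held stable for a full bit time:

library ieee;
use ieee.std_logic_1164.all;

entity manchester_tx is
  port ( clk2x  : in  std_logic;    -- assumed 2x bit-rate clock
         bit_in : in  std_logic;    -- data bit, stable for a full bit time
         txd    : out std_logic );
end entity;

architecture rtl of manchester_tx is
  signal half : std_logic := '0';   -- '0' = first half of the bit cell
begin
  process (clk2x)
  begin
    if rising_edge(clk2x) then
      if half = '0' then
        txd <= bit_in;       -- first half:  1 -> high, 0 -> low
      else
        txd <= not bit_in;   -- second half: 1 -> low,  0 -> high
      end if;
      half <= not half;
    end if;
  end process;
end architecture;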
On Wed, 22 Feb 2006 11:47:56 +1300, Jim Granville <no.spam@designtools.co.nz> wrote:
...
> There is another thread, where this actually matters from a medical
> systems / regulatory basis.
>
> Since you must have SOFTWARE to create the bitstream, then the
> admin has to include software-handling discipline.

Does this include almost all ASIC design, where synthesis SOFTWARE is still used to generate gates from RTL, SOFTWARE is used to place those gates, and SOFTWARE is used to route the connections (not to mention sw to run lvs/drc etc.)?

It must also include then even any schematic entry based ASICs, because SOFTWARE is used to enter/netlist all the schematics.

Forcing software handling discipline if software is in the path is not an easy requirement in my opinion. Unless you want to go back to paper napkin diagrams and tape over transparencies.

Article: 97410
mk wrote:
> Forcing software handling discipline if software is in the path is not
> an easy requirement in my opinion. Unless you want to go back to paper
> napkin diagrams and tape over transparencies.

Forcing software handling discipline on software teams isn't easy either.

Article: 97411
We are a startup company of 6 employees (2 full time) which is the exclusive licensee of the patented ideal optimum MAC protocol. Our prototype PCI based network interface card hardware design has been completed, and we are looking for partners who can help with the simulation and design of an Altera Cyclone FPGA. Unfortunately, our R&D budget is running thin, so we must appeal to the FPGA community for help.

The goal of this research project is to complete a 100 Mbps proof of concept system, so that we can attract significant investment. Initial target markets for the production level 1/10 Gbps products are HPC, wireless, and VoIP.

Conceptually, you can think of this technology as becoming the Linux of layer 2. Just as Linux has prevailed in unseating Windows at the OS layer, we will prevail in unseating Ethernet, as well as all of the middle hardware that Ethernet requires for its shortcomings.

Qualified individuals will be rewarded with stock options for their participation.

Article: 97412
mk wrote:
> On Wed, 22 Feb 2006 11:47:56 +1300, Jim Granville
> <no.spam@designtools.co.nz> wrote:
>
> ...
>
>> There is another thread, where this actually matters from a medical
>> systems / regulatory basis.
>>
>> Since you must have SOFTWARE to create the bitstream, then the
>> admin has to include software-handling discipline.
>
> Does this include almost all ASIC design where synthesis SOFTWARE is
> still used to generate gates from RTL, SOFTWARE is used to place those
> gates and SOFTWARE is used to route the connections (not to mention
> sw to run lvs/drc etc.)?
>
> It must also include then even any schematic entry based ASICs because
> SOFTWARE is used to enter/netlist all the schematics.
>
> Forcing software handling discipline if software is in the path is not
> an easy requirement in my opinion. Unless you want to go back to paper
> napkin diagrams and tape over transparencies.

"software-handling discipline" relates to the tools, as much as your own code. It is fairly common practice to archive the tools when a design is passed to production, and then ALL MAINT changes are done with those tools. So just because what you ship the customer might look like HW, you still have to do risk-reduction in house.

For a live, and classic, example look at the ISE v8 release. Some of the flaws that shipped in this are frankly amazing, and one wonders just what regression testing was done....

-jg

Article: 97413
Isaac Bosompem wrote:
> We have always progressed towards abstraction in the software
> world; similar pushes have also been made in the hardware world with
> EDAs and CAD software packages like MATLAB, which automate most of the
> grunt work. Perhaps program like HDL's are the new progression.

Actually VHDL stuck its toes into this some 20 years back. By 1993 the 1076.2 Standard Mathematical Package was part of the standards process, then 1076.3 Numeric Standards, not long later IEEE 1076.3 floating point, and then discussions for supporting sparse arrays and other very high level concepts for pure mathematical processing rather than hardware logic from a traditional view point. Interest in C based HDLs/HLLs for hardware design predates even Dave's TMCC work, which is itself over a decade old. So I don't think it's all that new. Rather, it started with sequential high level syntax, and automatic arithmetic/boolean expression processing was added to VHDL.

When computers were expensive in the 1960's and 1970's, we traded design labor for microcode and assembly language designs (frequently done by EE's). As computers dropped drastically in price, that practice rapidly became not cost effective, and was almost completely replaced with higher and higher levels of abstract language compilers to improve design productivity, traded off against inexpensive computer cycles.

We see the same process with logic "hardware simulators" ... AKA FPGAs: they have dropped rapidly in price, allowing huge designs to be implemented on them that are no longer cost effective in schematic form. And we are seeing even larger designs implemented that are not even cost effective to design at the gate level using first generation HDLs that allow the designer to waste design labor on detailed gate level design. Hardware development with 2nd and 3rd generation description languages is likely to follow the software model of using higher degrees of abstraction, specifically to prevent designers from obsessing over a few gates and, in the process, creating non-verifiably correct designs which may break when ported to the next generation FPGA or logic platform.

> All I can say, though, is only time will tell. It depends on how well
> compilers like FpgaC will be able to convert a program to a hardware
> description, and also on how well they are able to extract and find
> opportunities for concurrency.

FpgaC/TMCC has a number of things that are less than optimal, but it rests on a process of expressing all aspects of the circuit as boolean expressions, then aggressively optimizing that netlist. The results are surprising to some, but hey, it's really not new, as VHDL synthesis has covered nearly the same high level language syntax too. I think what is surprising to some is that low level software design is long gone, and low level hardware design is soon to be long gone, for all the same reasons of labor cost vs. hardware cost.

Article: 97414
Hi, I'm trying to work with the DDR SDRAM on the ML310 development board. I tried creating a simple base system with the DDR using the BSB wizard. I also changed the UCF file as in the sample file provided. However, I keep getting "memory test - failed" when running the generated memory test program. Can anybody help me?

Article: 97415
Sorry to take so long to reply, I wanted to look at the V4 and ADS5273 datasheets first.

-------------------
Master/Slave ISERDES
-------------------

Sean wrote:
> But, you were right about the IBUFDS_DIFF_OUT: Those still exist in V4,
> and *tadaaa* I just found out you can actually feed the two ISERDES in
> one IOB tile with those 2 inverted outputs.

OK, but can you independently invert the clock on the second ISERDES in the same tile to accomplish what you want? (I haven't created a test design to check this.)

Although, rather than trickery with two ISERDES, it looks like you could just do 6 bit data with the master/slave configuration shown on page 7 of XAPP705 (v1.2). Then you could use the clocking scheme shown in UG070 (v1.4) p39, with a BUFIO clocking the I/O and driving a BUFR regional clock; the BUFR can then divide by six to get you a 2x clock. Add a word alignment shifter, and this would get you data, enable, and a 2x regional clock for each A/D without using a single DCM or global clock resource.

-------------------
12 bit word sync
-------------------

Rather than attempting to have a matching global clock for each A/D to maintain word sync, I'd try either:

- treat the word clock like a data bit, and sample it to find the proper alignment shift (there may be only enough timing margin to sample the word clock on one edge of the bit-rate clock), or
- use the built in ADS5273 sync patterns for word alignment

-------------------
Cin vs. Zout
-------------------

One thing that jumped out at me from the ADS5273 datasheet was the output slew rate and impedance specs:

  LVDS Outputs Rise/Fall Time (typ)
    2.5 mA   400 ps
    3.5 mA   300 ps
    4.5 mA   230 ps
    6.0 mA   180 ps
  Differential Output Impedance   13 Kohm

If you hook such a part directly to an FPGA with 10 pF of input capacitance, Very Bad Things will happen to your 840 Mbps data due to the input reflections off the FPGA. Most likely, the guys doing the precision mixed signal part intentionally decided not to absorb the reflections on-chip, requiring you to do this externally.

For a short, well designed 100 ohm differential net, I'd recommend starting with something like this:

- crank up the drive level on the ADS5273 to the mid/high range
- place a 100 ohm back termination and 3 dB differential attenuator at the source (1)
- place a 6 dB differential attenuator at the FPGA
- use the _DT internal terminations in the FPGA
- plenty of simulation, prototyping, and real world measurements

Starting with 9 dB of intentional forward loss may be too much, but it makes for a return loss of at least 18 dB even with a horrible load at the far end; experimenting with more/less drive and less/more attenuation should find a happy medium. Also, using differential terminations at both ends doesn't help with any common mode crud, but adding extra parts to do a common mode termination would also add more parasitics.

(1) Digikey, 3 dB, 100 ohm differential, 0404, EXB-24AB3CR8X

-------------------
A/D Clocking
-------------------

>> Clocking high speed A/D's with an FPGA generated clock is a very
>> bad idea, as the inherent DCM & SSO jitter will quickly render the
>> sub-ps RMS A/D aperture jitter specs useless, giving you maybe
>> a handful of effective bits worth of data at the rated A/D input
>> bandwidth.
>
> You're absolutely right, which is why I'm looking for programmable
> external clock sources at the moment. BTW, any recommendations?

I'd need more info about your application (input signal freq/BW, why the need for varying A/D clocks) to make a definite assessment, but I'd advise staying far away from any programmable PLL based clock sources.

To avoid compromising the input aperture specs of the ADS5273 at the rated input bandwidth, I'd probably use a good/great crystal oscillator, with a passive (power divider + baluns) or {P}ECL ('EL14) clock distribution network if multiple A/D's are needed. A low jitter buffer/divider intended for clocking fast A/D's, something like the recent AD9514, may work fine, but I haven't tested any of those parts.

Here are some online references for A/D clock jitter specs, and conversions between clock jitter and Effective_Bits(Input_Freq):

Analog Devices AN-501
http://www.analog.com/UploadedFiles/Application_Notes/547373956522730668848977365163734AN501.pdf

Analog Devices AN-756
http://www.analog.com/UploadedFiles/Application_Notes/534504114752208671024345AN_756_0.pdf

Analog Devices AN-741
http://www.analog.com/UploadedFiles/Application_Notes/54506699244016AN741_0.pdf

Analog Devices (Radio 101)
http://www.analog.com/UploadedFiles/Technical_Articles/480501640radio101.pdf

RF Design
http://rfdesign.com/images/archive/0802Goldberg26.pdf

have fun,
Brian

Article: 97416
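For quick reference, the standard relationships used throughout the references above, stated generally (these are not ADS5273-specific numbers):

\mathrm{SNR}_{\mathrm{jitter}} = -20 \log_{10}\left( 2\pi f_{\mathrm{in}} t_{\mathrm{j}} \right)\ \text{dB}
\qquad
\mathrm{ENOB} = \frac{\mathrm{SNR} - 1.76\ \text{dB}}{6.02\ \text{dB}}

For example, 1 ps RMS of total clock jitter at a 70 MHz analog input limits SNR to about 67 dB, roughly 10.9 effective bits, no matter how many bits the converter itself provides.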
Hello group,

I wish to replace the inferred tri-state pins shown in the code below with primitives. How do I do that? Do I use Virtex-4 ODDR or OSERDES primitives?

Brad Smallridge
Ai Vision

  sram_tristate_process : process(sram_clk)
  begin
    if sram_clk'event and sram_clk = '1' then
      if sram_wr_2 = '1' then
        sram_flash_data <= sram_write_data;
      else
        sram_flash_data <= (others => 'Z');
      end if;
    end if;
  end process;

  sram_read_data_process : process(sram_clk)
  begin
    if sram_clk'event and sram_clk = '1' then
      sram_read_data <= sram_flash_data;  -- 36 bit
    end if;
  end process;

Article: 97417
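One way to make those tri-states explicit (a hedged sketch, and not necessarily the ODDR/OSERDES path Brad is asking about) is to instantiate one IOBUF per bit from the unisim library and keep the registers in fabric. The port and signal names are adapted from the post; the rest is invented:

library ieee;
use ieee.std_logic_1164.all;
library unisim;
use unisim.vcomponents.all;

entity sram_io is
  port ( sram_clk        : in    std_logic;
         sram_wr_2       : in    std_logic;
         sram_write_data : in    std_logic_vector(35 downto 0);
         sram_read_data  : out   std_logic_vector(35 downto 0);
         sram_flash_data : inout std_logic_vector(35 downto 0) );
end entity;

architecture rtl of sram_io is
  signal t_reg   : std_logic := '1';  -- '1' = driver in high-Z
  signal d_reg   : std_logic_vector(35 downto 0);
  signal rd_comb : std_logic_vector(35 downto 0);
begin
  process (sram_clk)
  begin
    if rising_edge(sram_clk) then
      t_reg          <= not sram_wr_2;    -- registered output enable
      d_reg          <= sram_write_data;  -- registered write data
      sram_read_data <= rd_comb;          -- registered read data
    end if;
  end process;

  -- One explicit IOBUF per data bit; T = '1' tri-states the driver.
  g_buf : for i in 0 to 35 generate
    u_iobuf : IOBUF
      port map ( I  => d_reg(i),
                 T  => t_reg,
                 O  => rd_comb(i),
                 IO => sram_flash_data(i) );
  end generate;
end architecture;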
hmurray@suespammers.org (Hal Murray) writes:
> How many people know the story of the Therac-25?
>
> Is that life support gear?

It's life critical, in that it can kill or seriously injure someone if it fails. In the same sense, the brakes of an automobile may be considered life critical. I'm not sure if I would call it life support, because it is generally not necessary to keep a patient alive in the short term, in the manner that a respirator would.

> Imaging?

Did it serve any imaging function? The primary purpose was application of therapeutic radiation, but perhaps it also provided imaging used by the operator for precise alignment of the target area?

Article: 97418
Steve Lass wrote:
> ISE Simulator Lite is included free with WebPACK and Foundation. It
> has a limit of around 10,000 lines of code.
>
> ISE Simulator is only available to Foundation customers and costs $995. It
> has no line limit.

Does the Lite version have any other limitations relative to the full version? For instance, is there any artificial slowdown?

Eric

Article: 97419
Jim Granville wrote:
> For a live, and classic, example look at the ISE v8 release.
> Some of the flaws that shipped in this are frankly amazing,
> and one wonders just what regression testing was done....

Or more importantly, why the select beta list developers' designs didn't stumble into the same problems. In large software land, alpha and beta pre-release cycles are the critical part of not slamming your complete customer base with critical bugs. The alpha and beta testers willing to do early adoption testing are probably one of the most prized vendor assets, and carefully controlled access resources, that any software vendor can develop. And for that privilege, and to build that relationship, it's frequently necessary to give your product away to those early adopters long term .... both the betas and the clean releases that follow.

Article: 97420
Most likely it slows down. I can't speak for Xilinx, but the ModelSim Xilinx tools get faster the more money you spend... up to a point... then they slow down again.

I have noticed the current release of ModelSim is considerably slower than 5.x (which we still use at work). I believe they increased the memory footprint and added geewiz features which just cause it to run slower.

M2C
Simon

"Eric Smith" <eric@brouhaha.com> wrote in message news:qhu0asx89k.fsf@ruckus.brouhaha.com...
> Steve Lass wrote:
> > ISE Simulator Lite is included free with WebPACK and Foundation. It
> > has a limit of around 10,000 lines of code.
> >
> > ISE Simulator is only available to Foundation customers and costs $995. It
> > has no line limit.
>
> Does the Lite version have any other limitations relative to the full
> version? For instance, is there any artificial slowdown?
>
> Eric

Article: 97421
Well, if it's the Linux of layer 2... then you can't have an exclusive patent and you have to release source code :-)

It sounds an interesting idea. But the OSI model is there for a reason. It's a fairly generic thing, and you might have difficulty calling it 802.x without it, as it's fairly well sprinkled around the spec. But I have been designing Ethernet interfaces without any OSI model for years... I wonder what makes yours any better?

If you want to contact me... let me know.

Simon

"Perfect Queue" <jonathangael@hotmail.com> wrote in message news:1140575213.319606.10180@f14g2000cwb.googlegroups.com...
> We are a startup company of 6 employees (2 full time) which is the
> exclusive licensee of the patented ideal optimum MAC protocol. Our
> prototype PCI based network interface card hardware design has been
> completed and we are looking for partners who can help with the
> simulation and design of an Altera Cyclone FPGA. Unfortunately, our
> R&D budget is running thin, so we must appeal to the FPGA community for
> help.
>
> The goal of this research project is to complete a 100Mbps proof of
> concept system, so that we can attract significant investment. Initial
> target markets for the production level 1/10Gbps networks will be
> applied in HPC, wireless, and VoIP.
>
> Conceptually, you can think of this technology as becoming the Linux of
> layer 2. Just as Linux has prevailed in unseating Windows at the OS
> layer, we will prevail in unseating Ethernet as well as all of the
> middle hardware that Ethernet requires for its shortcomings.
>
> Qualified individuals will be awarded with stock options for their
> participation.

Article: 97422
Oh, I'm so shocked... I couldn't even get as far as a map with 8.1.02 :-)

Simon

"johnp" <johnp3+nospam@probo.com> wrote in message news:1140562495.058573.237870@o13g2000cwo.googlegroups.com...
> Another user emailed me saying he also saw this problem. His
> solution (which I tried successfully) was to re-create the project
> from scratch.
>
> So, if you upgrade from 8.1.01 to 8.1.02 you may see Map fail.
>
> I've opened a Webcase with Xilinx.
>
> John Providenza

Article: 97423
Depends... what say it's an automatic blood pressure machine that pumps up and down... and crushes an arm... or just explodes the air bag, causing a slight bruise to the upper arm... or a false reading causes the doctor to give a medicine that's a poison for the patient?

"Jeremy Stringer" <jeremy@_NO_MORE_SPAM_endace.com> wrote in message news:43fb805a@clear.net.nz...
> Hal Murray wrote:
> >> The problem is that the regs require similar levels of documented
> >> development processes for life support as they do for any supporting
> >> medical electronics, such as imaging products.
> >
> > How many people know the story of the Therac-25?
> >
> > Is that life support gear? Imaging?
>
> Is that the famous one about the X-ray machine that irradiated people
> with 100X dose?
>
> That case you could call imaging, but the consequences of the particular
> failure (lethal radiation doses) are a little different from, for
> instance, the failure of a blood pressure machine.
>
> 2c
> Jeremy

Article: 97424
It is kind of ironic where we have come from. It is too easy to think of the same HDL design performing as well in an FPGA as in an ASIC, forgetting about any potential logic upsets. I wonder how some of the FPGA based MP3 players behave on air flights, or even how reliable laptops are up there. So FPGAs and large cache CPUs are more similar, as the SRAM area dominates. I am under the impression, though, that the BRAM is the usual generic 6/8T cell with modest capacitances, and the config bits are entirely bigger, with far more robust cells, given the nature of the beast. I would expect the logic to be far more reliable than the data it is processing.

Twenty+ years ago everyone knew that SRAM was bigger, more expensive, much faster, and more reliable than DRAM by a long shot, and people would wonder how much better CPUs would be if only they could have an all-SRAM main memory system like some CRAYs had. Today most everyone, esp. outside hardware, still holds onto that idea, yet DRAM is now by design far less susceptible to upsets, and can even be quite fast, with few-ns row cycle times for embedded DRAM cores, and still many times cheaper per bit, smaller, and with vastly less power consumption, but it requires extra process steps, so extra $.

I am not sure that we will ever see DRAM used in FPGAs, but perhaps instead we could treat the SRAM cell as a dynamic node to be refreshed by a more reliable, corrected memory system. Perhaps an embedded DRAM block could be used to routinely refresh the distributed SRAM arrays, checking at the same time and reporting differences while doing so. Perhaps not.