


Messages from 24150

Article: 24150
Subject: Re: Implementation
From: Ray Andraka <ray@andraka.com>
Date: Thu, 27 Jul 2000 20:07:36 GMT
Right, but be careful here.  Many of the DCT implementations out there are
optimized for software, not hardware.  As a result, you may wind up with a
DCT that is several times bigger and slower than one designed for hardware
implementation.

Nicolas Matringe wrote:

> Do you understand digital electronics? An FPGA is nothing more than a
> bunch of tiny logic elements that you can configure to perform some
> operations, with lots of wires you can configure to link them.
> No magic in here...

--
-Ray Andraka, P.E.
President, the Andraka Consulting Group, Inc.
401/884-7930     Fax 401/884-7950
email ray@andraka.com
http://www.andraka.com  or http://www.fpga-guru.com


Article: 24151
Subject: Re: Ya tengo mi correo @barcelona.com 4982
From: "Alun" <alun101@DELETEtesco.net>
Date: Thu, 27 Jul 2000 21:17:35 +0100
FWIW, a Babelfish translation:

 'Finally and after several traspies already we give to gratuitous
electronico mail your nombre@barcelona.com. Ademas also goes including the
gratuitous connection for rtb, rdsi and rdsi128 That the benefits!
wecspufoxupssrhqdcmhpsoexiimqhjstsfgnvxfqopwshvbszoqdpuusljckgxukcctnsf  '

Alun
camdigital

<pnhxfd@barcelona.com> wrote in message
news:_NAf5.32$Bj5.196@telenews.teleline.es...
> At last, and after several stumbles, we now offer free e-mail at
> your nombre@barcelona.com.
>
> The free connection for rtb, rdsi and rdsi128 is also included
>
> Enjoy them!!!
> wecspufoxupssrhqdcmhpsoexiimqhjstsfgnvxfqopwshvbszoqdpuusljckgxukcctnsf
>


Article: 24152
Subject: Re: Spartan-II power consumption
From: Greg Neff <gregneff@my-deja.com>
Date: Thu, 27 Jul 2000 20:31:31 GMT
In article <39807D8E.EE410FA6@yahoo.com>,
  rickman <spamgoeshere4@yahoo.com> wrote:
> I think this would tend to be correct, but Xilinx is redesigning the
> chips from the Virtex for lower cost. So I am pretty sure that they
> have reduced the feature size while keeping the voltage the same.
>
> The datasheet says the Virtex is .22 um. The Spartan II is .18 um
> according to an XCell article, xl35_5.pdf.
>

Yup, based on your reference I stand corrected.  I wonder how they get
away with it?  I was under the impression that 2.5V was too much for
0.18um geometries, hence the requirement for 1.8V power for devices
like Virtex-E, XPLA4, and XC9500XE.  Other Xilinx 2.5V devices
including XC4000XV, XC9500XV, and K2 use 0.25um geometries.

--
Greg Neff
VP Engineering
*Microsym* Computers Inc.
greg@guesswhichwordgoeshere.com


Sent via Deja.com http://www.deja.com/
Before you buy.
Article: 24153
Subject: LFSR as a divider
From: Ben Sanchez <ben@seti.org>
Date: Thu, 27 Jul 2000 14:14:14 -0700
I am building a pattern tester for verifying the integrity of a
parallel optical link.  My test setup lacks the one pulse per
second (1PPS) generator that it was designed to work with.
As a result I have replaced the 1PPS IOB in my Virtex patgen with
a counter that produces a sync pulse every 10K clocks.

The exact period of the sync pulse is not really important as
long as it is long enough to produce sufficiently random data
patterns on the fiber.  Anything longer than about 10K clocks is fine.
The problem is that the counter I use to produce the sync will
not run fast enough.

I have looked into using the LFSR setup described in Xilinx's
XAPP210 (By Maria George and Peter Alfke), and implementing it
looks simple enough.  The problem is that I don't see how one
gains access to all the bits in the LFSR.  I want to do something
like let the thing free run and output a pulse every time it
comes round to a particular state, say 0.

Does anybody know how to do this?

Thanks for your help.
 
===========================================================
  Ben Sanchez                     Engineer, Lab Manager
  e-mail:  ben@phoenix.seti.org   Project Phoenix
                                  SETI Institute
  Web:     http://www.seti.org    2035 Landings Drive
                                  Mountain View, CA 94043
===========================================================
Article: 24154
Subject: Re: LFSR as a divider
From: Jonas Thor <NoSpamthor@sm.luth.seNoSpam>
Date: Fri, 28 Jul 2000 00:45:13 +0200
On Thu, 27 Jul 2000 14:14:14 -0700, Ben Sanchez <ben@seti.org> wrote:

>The exact period of the sync pulse is not really important as
>long as it is long enough to produce sufficently random data
>patterns on the fiber.  Anything longer than about 10K is fine. 
>The problem is that the counter I use to produce the sync will
>not run fast enough.

You could use a pre-scaled counter. Build a small fast counter and use
the carry out from that counter to enable a larger counter. The larger
counter does not have to run at full speed. 
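The prescaler idea is easy to sanity-check in software. Below is an illustrative Python model (not HDL): a small 4-bit counter free-runs at full speed, and only its carry-out enables the larger counter, so the combined value still advances by one per clock while only the small counter has to meet timing.

```python
def prescaled_counter(clocks, pre_bits=4):
    """Model a prescaled counter: a small fast counter whose
    carry-out enables a larger, slower counter."""
    pre, big = 0, 0
    mask = (1 << pre_bits) - 1
    for _ in range(clocks):
        carry = pre == mask          # carry-out of the fast prescaler
        pre = (pre + 1) & mask
        if carry:                    # the big counter only needs a clock-enable
            big += 1
    return big, pre

big, pre = prescaled_counter(10_000)
assert (big << 4) | pre == 10_000    # combined count matches a flat counter
```

The big counter sees its enable only once every 16 clocks, so it has 16 clock periods to settle; only the 4-bit prescaler runs at the full clock rate.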

>I have looked into using the LFSR setup described in Xilinx's
>XAPP210 (By Maria George and Peter Alfke), and implementing it
>looks simple enough.  The problem is that I don't see how one
>gains access to all the bits in the LFSR.  I want to do something
>like let the thing free run and output a pulse every time it
>comes round to a particular state, say 0.
>
>Does anybody know how to do this?

Since the LFSRs in the app note are implemented in SRL16s, you cannot
read all the bits in the LFSR. You would have to implement the LFSR
with flip-flops to get access to all the bits.

/ Jonas Thor 
Article: 24155
Subject: Re: LFSR as a divider
From: Jonas Thor <NoSpamthor@sm.luth.seNoSpam>
Date: Fri, 28 Jul 2000 00:55:24 +0200
Sorry, I just thought of an obvious solution to your problem.  Assume
your LFSR is N bits. If you want to decode the all-zero state, but can
accept a delay, you can use the output of the LFSR to synchronously
reset a counter. Whenever the output of the LFSR is '1' the counter is
reset. When the output is '0' the counter increments. When the counter
reaches the value N, you can decode the all-zero state, although N
clocks late.
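A quick software model confirms the idea, with one tweak: a maximal-length LFSR never actually reaches the all-zero state (as Kent notes elsewhere in the thread), so the longest run of '0's at the output is N-1 bits, and the m-sequence run property guarantees that run occurs exactly once per period. Decoding a run of N-1 zeros with the reset-on-'1' counter therefore gives exactly one pulse per period. This Python sketch uses a 7-bit Fibonacci LFSR with taps [7,6]; it illustrates the scheme only, not the XAPP210 SRL16 implementation.

```python
from itertools import islice

def lfsr_stream(n=7, taps=(7, 6), seed=1):
    """Fibonacci LFSR, shifting right; the output is the LSB."""
    state = seed
    while True:
        yield state & 1
        fb = 0
        for t in taps:                    # tap k reads bit (n - k)
            fb ^= (state >> (n - t)) & 1
        state = (state >> 1) | (fb << (n - 1))

def count_pulses(bits, run_len):
    """Run-length decoder: reset on '1', count '0's, pulse at run_len."""
    count, pulses = 0, 0
    for b in bits:
        if b:
            count = 0
        else:
            count += 1
            if count == run_len:
                pulses += 1
    return pulses

bits = list(islice(lfsr_stream(), 2 * 127))  # two full periods of the 7-bit LFSR
assert count_pulses(bits, 6) == 2            # one pulse per 127-clock period
```

Scaling the same structure to a 14-bit LFSR gives one pulse every 2^14 - 1 = 16383 clocks, which matches Kent's figure later in the thread.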

Well, I'm tired... I guess there are better solutions.

/ Jonas   
Article: 24156
Subject: Re: XCS05XL de Xilinx
From: Pablo Bleyer Kocik <pbleyer@embedded.cl>
Date: Thu, 27 Jul 2000 19:45:43 -0400


Vicente Marti wrote:

> Vicente Marti <lavhek@teleline.es> wrote in the news message ...
> > I would like to obtain information in Spanish about the Xilinx
> > XCS05XL FPGA
> >
> >

    I recommend that you learn English... definitely. The other option is to
use some translation program for now...

--
Pablo Bleyer Kocik |
pbleyer            |"Rintrah roars & shakes his fires in the burdend air;
      @embedded.cl | Hungry clouds swag on the deep" — William Blake


Article: 24157
Subject: Re: Spartan-II power consumption
From: Bryan Williams <nospamformethanks@nowhere.com>
Date: Thu, 27 Jul 2000 20:36:49 -0400
On Thu, 27 Jul 2000 20:31:31 GMT, Greg Neff <gregneff@my-deja.com>
wrote:

>In article <39807D8E.EE410FA6@yahoo.com>,
>  rickman <spamgoeshere4@yahoo.com> wrote:
>> I think this would tend to be correct, but Xilinx is redesigning the
>> chips from the Virtex for lower cost. So I am pretty sure that they
>have
>> reduced the feature size while keeping the voltage the same.
>>
>> The datasheet says the Virtex is .22 um. The Spartan II is .18 um
>> according to an XCell article, xl35_5.pdf.
>>
>
>Yup, based on your reference I stand corrected.  I wonder how they get
>away with it?  I was under the impression that 2.5V was too much for
>0.18um geometries, hence the requirement for 1.8V power for devices
>like Virtex-E, XPLA4, and XC9500XE.  Other Xilinx 2.5V devices
>including XC4000XV, XC9500XV, and K2 use 0.25um geometries.
I think I recall from the Xilinx brainwashing sessions (a.k.a. XFest
:P ) that the secret to the geometry/voltage confusion was that they
were using .18u metal routing while the gates were kept at .22u, so the
voltage thresholds matched the higher Vcc while still getting some of
the shrink possible with .18u design rules.

Just because the process allows finer lines doesn't mean you can't
make wide ones if you want to, right?


--Beware, shameless profiteering follows...
If you found the info useful, do us both a favor and
sign up to get paid to surf at:
(hey, it's about $50 a month combined when maxed out!)
http://www.getpaid4.com/?bryguy2000
http://www.alladvantage.com/go.asp?refid=MDE768
Article: 24158
Subject: Re: Question of Virtex DLL
From: Ben Sanchez <ben@seti.org>
Date: Thu, 27 Jul 2000 18:27:02 -0700
No.  It can only be driven by a global clock input buffer
(IBUFG), which can only be driven by a global clock input pin
(there are 4 on a Virtex, 8 on a Virtex-E).  If you want an internal
signal to drive a CLKDLL, you have to bring it out on a pin and back
in on a clock pin, but all that delay out and in probably defeats
the purpose of the DLL.

-- 
===========================================================
  Ben Sanchez                     Engineer, Lab Manager
  e-mail:  ben@phoenix.seti.org   Project Phoenix
                                  SETI Institute
                                  2035 Landings Drive
  Web:     http://www.seti.org    Mountain View, CA 94043
===========================================================

channing@my-deja.com wrote:
> 
> Hi, Experts,
> 
> Can the CLKDLL in Virtex/Spartan II be driven by an internal
> signal?  If so, how do I implement this?
> 
> Thanks.
> 
> Channing Wen
> 
> Sent via Deja.com http://www.deja.com/
> Before you buy.
Article: 24159
Subject: compact PCI Xilinx virtex FPGA card
From: "Raj B Krishnamurthy" <rajk@bellsouth.net>
Date: Thu, 27 Jul 2000 22:14:10 -0400
Hi all: Has anybody seen a Compact PCI Xilinx Virtex card? I know of a Compact
PCI Altera card and many PCI Virtex development boards, but I cannot find
a Compact PCI Virtex card.
Any assistance is appreciated.
-- Raj


Article: 24160
Subject: Re: Viewlogic Licencing
From: rickman <spamgoeshere4@yahoo.com>
Date: Fri, 28 Jul 2000 01:33:25 -0400
Greg Neff wrote:
> In article <397FBBD0.8EBC4B28@yahoo.com>,
>   rickman <spamgoeshere4@yahoo.com> wrote:
> > I remember from years ago that Viewlogic has a licensing "quirk" (I
> used
> > much stronger language at the time). They had and still seem to have
> two
> > types of licenses. You can get a target specific license which will
> only
> > let the tools work with the libraries for a specific chip vendor's
> > devices. Or if you paid a much higher price you can get a full "board"
> > package that will work with any library including board design libs.
> >
> > The problem was if you paid the big bucks for the board package you
> > could not share any files with a customer who was using a vendor
> > specific version. This was not limited to files that were done for
> other
> > vendor's chips, but even for the libraries that their license was
> > authorize for.
> >
> > The question is, has Viewlogic found a way to deal with this problem?
> > I am thinking about buying a full license for board level design. But
> > there is not much point if I can't share schematics with customers who
> > only have the chip level package.
> >
> 
> The obvious response here is: What does Innoveda (Viewlogic) have to
> say about this?

At the time I had the problem, they were insistent that they had to do
it this way to "protect" their interests. They just couldn't "get it"
that this is such a major PITA that it actually prevented us from buying
more of the full up board stations. I never did "get it" why they
couldn't figure out a way to allow FPGA designs developed on *any*
station to be used and reworked on *any* station licensed for FPGA
development. They seemed to think that the one-way nature of transferring
designs was an acceptable limitation. 


> In any case, I use ViewDraw and ViewSim (latest versions) for FPGA
> entry and simulation, and I know that the file formats have not changed
> in many years.  Since ViewDraw is also used for board schematic
> capture, I suspect that nothing has changed.
> 
> Maybe you could write a BASIC program to post-process the schematic
> files and replace the line in question.

This is not a solution unless the "key" is cracked. The line in question
is a key that encodes the licensed capabilities and the file name into a
line in the file. The only way I know of to generate a line compatible
with the workstation you were using was to create an empty schematic
with the tools and then to copy that line between the files. To make
this work, I would need *both* types of licenses. I consider this a
MAJOR PITA!!! 

 
> We use OrCAD Capture for board level schematic capture, OrCAD Layout
> for PCB Layout, and Specctra for autorouting.  We also bought OrCAD
> Express, but that was a nightmare.

I made that mistake too. I bought Orcad Express for Xilinx work. After
taking the Orcad training, I was convinced that I should try the VHDL. I
spent two months trying to get it to work before I figured out that I
was not the problem. The problem was a VHDL compiler and simulator that
had a crash half-life of 50 new lines of code. 

I dumped the Orcad Express in favor of the Xilinx Foundation Express
package, but had to rewrite the VHDL to get the Metamor compiler to
produce efficient code. Then two months later I had to do it again when
Xilinx switched to the FPGA Express compiler and told me they would no
longer support the Metamor software. That cost me a few grey hairs!!

At this point I don't think I will be buying any more of the Orcad
software. They have a new very restrictive licensing system and I don't
want any more dongles. I have not updated to the new Version 9 system
and expect to switch to something new when I start a new project. 

I like the Xilinx licensing. I have set all my hard drive serial numbers
to be the same. Now I can work on my project on my laptop as well as my
desktop without having to drag around a dongle (and maybe lose it!!!) or
worry about having a particular NIC plugged in. 

 
> Since we use both OrCAD and Viewlogic tools, I have to say that OrCAD's
> entry tools are easier to use, and we are more productive when using
> them.  Also, from a configuration management point of view, OrCAD board
> design files include all schematic sheets and all parts.  You don't
> have to worry about synchronizing libraries with schematic files, and
> it makes it easy to deliver designs.
> 
> OTOH, OrCAD back end tools (i.e. simulators, libraries, and routers)
> are not what you would call 'best in class'.  We build our own parts
> for schematics.  We bolt on Specctra for board routing.  We don't use
> OrCAD at all for FPGA entry and simulation.
> 
> With OrCAD, we were trying to use one EDA vendor for all of our design
> entry and simulation needs.  This didn't work out.  For the time being,
> we will stick with Viewlogic for FPGA schematic entry and simulation,
> and we will stick with OrCAD for board entry and layout.
> 
> I would be interested to hear what other people think of ViewDraw
> versus other board-level schematic capture tools.
> 
> --
> Greg Neff
> VP Engineering
> *Microsym* Computers Inc.
> greg@guesswhichwordgoeshere.com
> 
> Sent via Deja.com http://www.deja.com/
> Before you buy.

-- 

Rick Collins

rick.collins@XYarius.com

Ignore the reply address. To email me use the above address with the XY
removed.



Arius - A Signal Processing Solutions Company
Specializing in DSP and FPGA design

Arius
4 King Ave
Frederick, MD 21701-3110
301-682-7772 Voice
301-682-7666 FAX

Internet URL http://www.arius.com
Article: 24161
Subject: Re: Spartan-II power consumption
From: rickman <spamgoeshere4@yahoo.com>
Date: Fri, 28 Jul 2000 01:37:16 -0400
My understanding is that the quoted feature size is a horizontal
dimension, while the voltage is determined by the vertical dimension of
the gate oxide thickness. They don't necessarily have to scale one as they
scale the other. It just works better if they do. 

I am not a semiconductor process engineer, so this is just an educated
guess on my part. 


Greg Neff wrote:
> 
> In article <39807D8E.EE410FA6@yahoo.com>,
>   rickman <spamgoeshere4@yahoo.com> wrote:
> > I think this would tend to be correct, but Xilinx is redesigning the
> > chips from the Virtex for lower cost. So I am pretty sure that they
> have
> > reduced the feature size while keeping the voltage the same.
> >
> > The datasheet says the Virtex is .22 um. The Spartan II is .18 um
> > according to an XCell article, xl35_5.pdf.
> >
> 
> Yup, based on your reference I stand corrected.  I wonder how they get
> away with it?  I was under the impression that 2.5V was too much for
> 0.18um geometries, hence the requirement for 1.8V power for devices
> like Virtex-E, XPLA4, and XC9500XE.  Other Xilinx 2.5V devices
> including XC4000XV, XC9500XV, and K2 use 0.25um geometries.
> 
> --
> Greg Neff
> VP Engineering
> *Microsym* Computers Inc.
> greg@guesswhichwordgoeshere.com
> 
> Sent via Deja.com http://www.deja.com/
> Before you buy.

-- 

Rick Collins

rick.collins@XYarius.com

Ignore the reply address. To email me use the above address with the XY
removed.



Arius - A Signal Processing Solutions Company
Specializing in DSP and FPGA design

Arius
4 King Ave
Frederick, MD 21701-3110
301-682-7772 Voice
301-682-7666 FAX

Internet URL http://www.arius.com
Article: 24162
Subject: Re: LFSR as a divider
From: rickman <spamgoeshere4@yahoo.com>
Date: Fri, 28 Jul 2000 01:47:58 -0400
"K.Orthner" wrote:
> 
> Assuming that you're not extremely short on space, you could build an LFSR
> out of regular FFs and logic blocks.  Because the logic for an LFSR is much
> simpler than that for a counter, it would still be able to run significantly
> faster.
> 
> The VHDL code would look something like this:
> 
> lfsr: process (rst, clk)
> begin
>   if rst = '1' then
>       lfsr_reg <= (0 => '1', others => '0');
>       -- Note: You have to reset the LFSR to a non-zero value.
>
>   elsif rising_edge( clk ) then
>       lfsr_reg <= (lfsr_reg(0) xor lfsr_reg(2)) & lfsr_reg(lfsr_size-1 downto 1);
>       -- Another note: This shifts right, new bit in at the top.
>
>   end if;
> end process;
> 
> You can then just AND together all of the bits of lfsr_reg, which will give
> you a pulse when the entire LFSR is '1'.  (The all-zero state will never
> happen).

You were doing pretty well until you suggested ANDing all the bits of
the counter. If the standard fast carry counter is speed limited, then a
14 input AND gate will be the speed limiting logic. This would need to
be pipelined and likely floorplanned. 

Actually, I can't see how a 14 bit fast carry counter could be too slow
for this application. The fast carries are very fast and with only 14
bits are likely to be nearly as fast as the LFSR. How many bits were
being used in the counter?

 
> A length of 14 bits should give you a pulse once every 16383 clk cycles.
> 
> -Kent
> 
> P.S.  I haven't read the app note, so my implementation may look nothing at
> all like Xilinx's.
> 
> > I have looked into using the LFSR setup described in Xilinx's
> > XAPP210 (By Maria George and Peter Alfke), and implementing it
> > looks simple enough.  The problem is that I don't see how one
> > gains access to all the bits in the LFSR.  I want to do something
> > like let the thing free run and output a pulse every time it
> > comes round to a particular state, say 0.
> >
> > Does anybody know how to do this?
> 
> ------------
> Kent Orthner

-- 

Rick Collins

rick.collins@XYarius.com

Ignore the reply address. To email me use the above address with the XY
removed.



Arius - A Signal Processing Solutions Company
Specializing in DSP and FPGA design

Arius
4 King Ave
Frederick, MD 21701-3110
301-682-7772 Voice
301-682-7666 FAX

Internet URL http://www.arius.com
Article: 24163
Subject: Re: implementation problem of Foundation 2.1i
From: felix_bertram@my-deja.com
Date: Fri, 28 Jul 2000 06:16:08 GMT
Daixun,

there is a command line tool named xflow, which encapsulates the
complete Xilinx flow. It is documented in the Development System
Reference Guide (dev_ref.pdf).

In summary, you'll have to:
* Copy your netlists and constraints to the same directory
* Call xflow with the following command line:
xflow -implement balanced.opt -config bitgen.opt -tsim generic_vhdl.opt
toplevel.edf


Kind regards

Felix Bertram

In article
<5FE97DD96380D111821E00805F2720E90170DE6A@endor.ee.surrey.ac.uk>,
  daixun.zheng@eim.surrey.ac.uk (Daixun Zheng) wrote:
> Because F2.1i only supports a subset of VHDL, I had to synthesize
> my VHDL core using LeonardoSpectrum, which produced the *.edf. The
> target device is a Virtex V800hq240.
> Then I want to implement this core in F2.1i.
> The steps are:
> 	1. Create a new project
> 	2. Add the *.ucf and *.edf to this project (but at this point
> the manual Tools > Implementation entries are all disabled, so I
> cannot use the 'Flow Engine').
> 	3. I have to click the 'Implement' button, and then I get the
> message 'can't create the chip'.
>
> How can I do that using the *.edf and *.ucf in F2.1i?
>
> Thanks a lot!
>
> Daixun
>
> --
> Posted from IDENT:exim@prue.eim.surrey.ac.uk [131.227.76.5]
> via Mailgate.ORG Server - http://www.Mailgate.ORG
>


Article: 24164
Subject: Re: implementation problem of Foundation 2.1i
From: Klaus Falser <kfalser@durst.it>
Date: Fri, 28 Jul 2000 06:36:17 GMT
In article
<5FE97DD96380D111821E00805F2720E90170DE6A@endor.ee.surrey.ac.uk>,
  daixun.zheng@eim.surrey.ac.uk (Daixun Zheng) wrote:
> Because F2.1i only supports a subset of VHDL, I had to synthesize
> my VHDL core using LeonardoSpectrum, which produced the *.edf. The
> target device is a Virtex V800hq240.
> Then I want to implement this core in F2.1i.
> The steps are:
> 	1. Create a new project
> 	2. Add the *.ucf and *.edf to this project (but at this point
> the manual Tools > Implementation entries are all disabled, so I
> cannot use the 'Flow Engine').
> 	3. I have to click the 'Implement' button, and then I get the
> message 'can't create the chip'.
>
> How can I do that using the *.edf and *.ucf in F2.1i?
>
> Thanks a lot!
>
> Daixun
>
> --
> Posted from IDENT:exim@prue.eim.surrey.ac.uk [131.227.76.5]
> via Mailgate.ORG Server - http://www.Mailgate.ORG
>

Maybe you have done it the wrong way.

a) You must use the Design Manager, not the Project Manager.
b) Create a new project, specifying your *.edf file as the "input design".
c) After choosing Design/Implement or "New version", select "Custom"
for the field "Constraints file". This allows you to insert the name of
your *.ucf file.

Hope this helps
   Klaus

--
Klaus Falser
Durst Phototechnik AG
I-39042 Brixen


Article: 24165
Subject: Re: Variable shifting
From: rickman <spamgoeshere4@yahoo.com>
Date: Fri, 28 Jul 2000 02:44:20 -0400
Ray Andraka wrote:
> 
> Well, the F5 mux is still a 2 input mux, sure you get it for "free", but that is beside my
> point.   Consider the simple case of a 4 input rotator (a barrel shift with the inputs
> 'wrapped around').  If you implement it in 4 input muxes, you need 4 of them right.  That
> is 4 slices or 4 CLBs depending on the xilinx architecture, fine.   If you use 2 input
> muxes in a merged tree, the first layer uses 4, and the second layer uses 4, for a total
> area that is the same as that of the case using 4 input muxes, but without using the F5
> muxes.   The difficulty with using the F5 muxes is that you don't get to share the terms.

I understand what you are saying, but the "free" muxes are *exactly* my
point. In any case but the most trivial, the merged tree is better
than the non-merged tree. As you get larger, the merged tree is *much*
better. And you can do a merged tree with 4 input muxes as well. 

       2mux   4mux
bits   CLBs   CLBs
4       2      2
8       6      6 (one layer of 2mux)
16     16     16
32     40     40 (one layer of 2mux)
64     96     96

So it looks like a merged tree of 4mux and 2mux uses the same number of
CLBs, but certainly the 4mux approach uses fewer routes since some of
them are internal to the CLBs.  

Routing congestion is a little hard to measure. If you count pins that
must be connected, it is 10 * CLBs for the 4mux and 12 * CLBs for the 2
mux. The net counts are N*(log4(N)+1) for the 4 mux and N*(log2(N)+1)
for the 2 mux.

         2mux       4mux
bits   Nets Pins  Nets Pins
4       12   24     8   20
16      80  192    48  160
64     448 1,152  256  960

I can't tell if this is a significant difference or not. I suspect that
the pin count is more important than the net count, but it is probably a
mixture of both. So the difference is there, and can approach a factor
of two in favor of the 4 mux as the size gets large. For the smaller
shifters it is likely not a big issue. 

The 8 input mux performs worse than either in terms of CLB count. It
will use 4/3 the CLB count of the 2mux and 4mux, the same pin count as
the 2mux and only slightly fewer nets than the 4mux. So it does not look
like it has any advantage over the 4mux approach. 
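The net and pin formulas above are easy to tabulate. This Python snippet just reproduces the arithmetic in the post; the CLB counts are copied from the first table (they are the same for the 2-input and 4-input mux trees), not derived independently.

```python
import math

# CLB counts from the table above (2mux and 4mux trees come out equal)
CLBS = {4: 2, 8: 6, 16: 16, 32: 40, 64: 96}

def nets(n_bits, mux_width):
    """N * (log_k(N) + 1), with k = mux input count, per the post.
    Exact only when N is a power of k (8- and 32-bit 4mux trees
    need a mixed 2mux layer, as the post notes)."""
    levels = round(math.log2(n_bits) / math.log2(mux_width))
    return n_bits * (levels + 1)

def pins(n_bits, mux_width):
    """10 pins/CLB for the 4mux tree, 12 pins/CLB for the 2mux tree."""
    return (10 if mux_width == 4 else 12) * CLBS[n_bits]

assert (nets(64, 2), pins(64, 2)) == (448, 1152)
assert (nets(64, 4), pins(64, 4)) == (256, 960)
```

Running it over 4, 16, and 64 bits reproduces the nets/pins table in the post, and shows the 4mux advantage approaching a factor of two in nets as the shifter grows.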


-- 

Rick Collins

rick.collins@XYarius.com

Ignore the reply address. To email me use the above address with the XY
removed.



Arius - A Signal Processing Solutions Company
Specializing in DSP and FPGA design

Arius
4 King Ave
Frederick, MD 21701-3110
301-682-7772 Voice
301-682-7666 FAX

Internet URL http://www.arius.com
Article: 24166
Subject: Re: LFSR as a divider
From: "K.Orthner" <nospam@ihatespam.com>
Date: Fri, 28 Jul 2000 15:45:50 +0900
> You were doing pretty well untill you suggested ANDing all the bits of
> the counter. If the standard fast carry counter is speed limited, then a
> 14 input AND gate will be the speed limiting logic. This would need to
> be pipelined and likely floorplanned.

I was going to suggest pipelining the AND . . but I figured if you can use
6-input LUTs, then a 14-bit AND is only 2 logic levels . . . . how slow can
it be?

<checking the book quick to make sure you can use 6-input LUTs . . . >

Okay.  Looks like no 6-input LUTs.
But still, two levels of 4-input LUTs?

o <= (i0 * i1 * i2 * i3) * (i4 * i5 * i6 * i7) * (i8 * i9 * i10 * i11) *
(i12 * i13)

That's not *that* bad, is it?  How fast are you trying to run your counter?
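The two-level claim follows from how a wide AND packs into k-input LUTs: each level merges up to k signals per LUT, so the depth is ceil(log_k(N)). A small sketch of that arithmetic, assuming 4-input LUTs as in Virtex:

```python
import math

def and_lut_depth(n_inputs, lut_k=4):
    """Logic levels needed to AND n_inputs together using k-input LUTs."""
    levels = 0
    while n_inputs > 1:
        n_inputs = math.ceil(n_inputs / lut_k)  # each level merges k signals per LUT
        levels += 1
    return levels

# A 14-input AND in 4-input LUTs: 4 LUTs (fan-in 4+4+4+2), then 1 LUT.
assert and_lut_depth(14) == 2
assert and_lut_depth(14, lut_k=6) == 2  # 6-input LUTs would also take 2 levels
```

As Rick points out in the reply, the LUT depth is only half the story; the routing between the levels often dominates the delay.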

------------
Kent Orthner



Article: 24167
Subject: Re: LFSR as a divider
From: rickman <spamgoeshere4@yahoo.com>
Date: Fri, 28 Jul 2000 03:17:23 -0400
"K.Orthner" wrote:
> 
> > You were doing pretty well untill you suggested ANDing all the bits of
> > the counter. If the standard fast carry counter is speed limited, then a
> > 14 input AND gate will be the speed limiting logic. This would need to
> > be pipelined and likely floorplanned.
> 
> I was going to suggest pipelining the AND . . but i figured if you can use
> 6-input LUTs, then a 14-bit AND is only 2 logic levels . . . . how slow can
> it be?
> 
> <checking the book quick to make sure you can use 6-input LUTs . . . >
> 
> Okay.  Looks like no 6-input LUTs.
> But still, two levels of 4-input LUTs?
> 
> o <= (i0 * i1 * i2 * i3) * (i4 * i5 * i6 * i7) * (i8 * i9 * i10 * i11) *
> (i12 * i13)
> 
> That's not *that* bad, is it?  How fast are you trying to run your counter?
> 
> ------------
> Kent Orthner

I agree that this is not all that slow. But it will be slower than an
LFSR. The LFSR is designed to use only a single level of logic for the
feedback, and the FFs can be arranged to minimize the routing delays. The
14 input AND gate will make it harder to get short routes. Often the
route delays are longer than the logic delays. 

In this design I can't see where a 14 bit fast counter will be too slow
for nearly any application. I would like to know what clock rate is
being used. Especially if a count of 10,000 is roughly equal to 1
second!!! I may have misread that part...


-- 

Rick Collins

rick.collins@XYarius.com

Ignore the reply address. To email me use the above address with the XY
removed.



Arius - A Signal Processing Solutions Company
Specializing in DSP and FPGA design

Arius
4 King Ave
Frederick, MD 21701-3110
301-682-7772 Voice
301-682-7666 FAX

Internet URL http://www.arius.com
Article: 24168
Subject: JTAG Technologies Boundary-Scan Test
From: Franz Hollerer <hollerer@hephy.oeaw.ac.at>
Date: Fri, 28 Jul 2000 11:03:35 +0200
hi,

I want to generate an infrastructure test with the JTAG Technologies VIP
manager. I used vl2jtag (the Viewlogic interface from JTAG Technologies)
to generate an EDIF file. When I click 'Generate Infra Test', the VIP
manager stops with the error message 'No TDO-net found' (see below),
but this net exists in my design and I have also found it in the EDIF
file.

Any idea?

Thanks,
Franz Hollerer

-----------------------------------------------------------------------
file bld.msg (contains warnings and errors)
-----------------------------------------------------------------------
Datasheets selection:          jtagtest.sel
General data:                  jtagtest.gen
Device information:            jtagtest.dif
Options:                       -h -nd -nc

Reading BST datasheets

    10k30aq208.dsh
    18245a.dsh
    npsc110f.dsh
    xcv50_pq240.dsh

#### BEGIN COMPONENT CHECK ON "EFP10K30A"
#### END COMPONENT CHECK ON "EFP10K30A" : NO ERRORS

#### BEGIN COMPONENT CHECK ON "ABT18245A"
#### END COMPONENT CHECK ON "ABT18245A" : NO ERRORS

#### BEGIN COMPONENT CHECK ON "SCANPSC110F"
#### END COMPONENT CHECK ON "SCANPSC110F" : NO ERRORS

#### BEGIN COMPONENT CHECK ON "XCV50_6"
 Warning >> Pin "PROGRAM_B" has BST type but is not specified in the
Boundary-Scan Register
#### END COMPONENT CHECK ON "XCV50_6" : 1 WARNINGS

EBST Status :
 Global errors      : 0
 Global warnings    : 0
 Board errors       : 0
 Board warnings     : 0
 Component errors   : 0
 Component warnings : 1
 Sum of errors      : 0
 Sum of warnings    : 1

Reading EDIF netlist 'jtagtest.edf'

EDIF Status :
 Sum of errors      : 0
 Sum of warnings    : 0


Matching pins of component 'XCV50_6' by pin number
WARNING 1148: No match between some netlist-pin(s) and any EBST-pin was
found
    Part 'U15_jtagtest_sheet4'

Processing netlist and datasheets

Processing device information file

Starting BST chain calculations
ERROR   1160: No TDO-net found for this design

BLD_GEN Status :
 Sum of errors      : 1
 Sum of warnings    : 1

BLD_GEN *** Exit on Error ***

--
Institut fuer Hochenergiephysik
Nikolsdorfer Gasse 18
1050  Wien
Austria

Tel: (+43-1)5447328/50


Article: 24169
Subject: 5V Lattice 1032E and 3.3V compatibility
From: John Chambers <JohnC@ihr.mrc.ac.uk>
Date: Fri, 28 Jul 2000 11:14:46 +0100
Links: << >>  << T >>  << A >>
I need to interface a Lattice 1032E to a 3.3V processor.  I know I could
buy a 3.3V part but I happen to have a 5V device.  I've measured the pin
output voltage of the 5V part and it never goes above 3.3V.  Has anyone
tried a 5V/3.3V interface with a 5V Lattice CPLD?

John
Article: 24170
Subject: Re: Which one is good coding style?
From: eml@riverside-machines.com.NOSPAM
Date: Fri, 28 Jul 2000 10:52:14 GMT
Links: << >>  << T >>  << A >>
On Thu, 27 Jul 2000 09:43:02 -0400, rickman <spamgoeshere4@yahoo.com>
wrote:

>Renaud Pacalet wrote:
>> You're welcome. I learnt something today too: nobody seems able to
>> explain to me why left and right are inverted in a mirror but not top
>> and bottom ;-)
>
>If you invert both top and bottom as well as left and right, you get
>back the original image. In reality you cannot distinguish which
>direction is inverted and which is not. The image is a mirror image,
>with top above, bottom below, left on your left and right on your right.
>So to say that the left-right direction is inverted is not correct; that
>depends on your frame of reference.

All the answers say much the same thing, but I think this may be
easier to understand:

Mirrors have no concept of up, down, left, or right - they just invert
front-to-back, along the axis perpendicular to the mirror.

The problem is then one of explaining how a person makes sense of what
they see in the mirror. When you look in a mirror, you see yourself,
looking back at you. To make sense of this, you mentally rotate
yourself 180 degrees, putting yourself in the shoes of the person
looking at you. If you then wave your right hand, that mental stand-in
appears to be waving their left hand. In other words, the problem is
purely psychological, and arises because humans have a vertical line of
left-right symmetry (the line you rotate yourself about to put yourself
in the other person's shoes).
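The mental-rotation argument can be checked with a little coordinate
arithmetic. The sketch below is a pure-Python toy; the coordinate
convention (x to your right, y up, z toward the mirror) is an assumption
chosen for illustration. Note that the mirror by itself only inverts
front-to-back; it is the mental 180-degree rotation that turns this into
an apparent left-right swap.

```python
# Model a point as (x, y, z): x = your right, y = up, z = toward the mirror.

def mirror(p):
    """Mirror in the x-y plane: negate only z (front-back)."""
    x, y, z = p
    return (x, y, -z)

def rotate_y_180(p):
    """Mentally walking around to face yourself: 180 degrees about the
    vertical axis, which negates x and z."""
    x, y, z = p
    return (-x, y, -z)

right_hand = (1, 0, 0)               # a point on your right

# What the mirror actually does: front-back inversion only.
print(mirror(right_hand))            # (1, 0, 0) -- still on your right

# What the viewer *thinks* happened, after the mental rotation:
print(rotate_y_180(mirror(right_hand)))   # (-1, 0, 0) -- the "left" hand

top_of_head = (0, 1, 0)
print(rotate_y_180(mirror(top_of_head)))  # (0, 1, 0) -- up stays up
```

Composing the reflection with a rotation about a different axis of
symmetry would, of course, make a different pair of directions appear
swapped.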

Interesting problem: on planet Zorg, the inhabitants have a horizontal
line of symmetry, rather than a vertical line of symmetry, as we do.
They also have mirrors. What do they see when they look in the mirror?

Evan

Article: 24171
Subject: Re: Pad trireg in XLA FPGA
From: eml@riverside-machines.com.NOSPAM
Date: Fri, 28 Jul 2000 10:53:11 GMT
Links: << >>  << T >>  << A >>
On Wed, 26 Jul 2000 17:09:42 -0700, "Andy Peters"
<apeters.Nospam@nospam.noao.edu.nospam> wrote:

>Hey, Synopsys, how 'bout this: instead of working on stuff like "incremental
>synthesis," which I couldn't really care less about, howsabout doing a
>better job of understanding the chips' architectures, and thus taking
>advantage of the neat-o features that Xilinx thoughtfully put in there for
>us?
>
>Oh, I get it, it's the Microsoft plan.  Consider: Xilinx includes FPGA
>Express with the tools, and Synopsys (possibly correctly) assumes that the
>average designer creating average designs (i.e., those that don't "push the
>envelope," as Ray would say) won't really need the extra strength that
>Synplicity has, or more likely, the designer won't be able to justify (to
>the Boss) the cost of "another tool that does what something you already
>have does."

I would really like to find out how Xilinx justified the Synopsys OEM
agreement. Did they ask any users for feedback on synthesis tools? Did
the marketing guys actually ask any engineers inside Xilinx what they
thought of Synopsys? Obviously not. It's been obvious for years that
Synopsys was way behind on both language and FPGA support, but someone
decided that Synopsys's dominance in the ASIC world somehow made up
for these minor deficiencies. The end result is that entry-level users
have to put up with a second-rate tool, and Xilinx has to waste a lot
of time supporting it.

I'd also like to find out why Xilinx is keeping so quiet about XST
(X's own 3.1-bundled synth tool). I haven't used it yet but, by at
least one authoritative account, it's better than FPGA Express. The
reason is presumably that they have to keep Synopsys, Synplicity, and
Exemplar sweet, and giving away a good synthesiser is hardly going to
help. This may be reasonable, but it means that we're going to carry
on paying high prices for synthesisers that don't understand Xilinx
architectures.

Evan
Article: 24172
Subject: Re: Pad trireg in XLA FPGA (beating a horse to death)
From: eml@riverside-machines.com.NOSPAM
Date: Fri, 28 Jul 2000 10:54:19 GMT
Links: << >>  << T >>  << A >>
On Thu, 27 Jul 2000 14:30:05 -0400, rickman <spamgoeshere4@yahoo.com>
wrote:

>Andy Peters wrote:
>> >I don't understand why the synthesis vendors need to deal with it. This
>> >should be a map, place and route issue which is done in the Xilinx
>> >tools. The synthesis tools only need to generate the FF and the MPR
>> >tools can put it where it will work best.
>> 
>> In this particular case, I guess the place-and-route tool should notice that
>> this particular CLB (with the inverter) drives the tristate enable of, oh,
>> 32 output pins, and you'd think that it figure out that the CLB could be
>> eliminated.  But, you'd also think that the synthesis tool should understand
>> the chip architecture better and not put the inverter in there in the first
>> place!
>
>My understanding is that when it comes to inverters, the MPR tools are
>supposed to be smart. They have always been capable of moving inverters
>into any useful place, like the tristate control for example. But then I
>may be thinking of a different vendor, Lucent. I know they support a
>polarity selection on the tristate controls. I don't remember if Xilinx
>does or not. That is why I hate using many of the tools. They expect you
>to remember all the grody details of the parts in order to get maximum
>utility from them. 

I've never found any documentation on what exactly the mapper can and
can't do, or whether it's possible to stop it doing whatever it can
do. However, I have experimented with one case in Virtex, which showed
that the mapper could deal with inversions on a net which led directly
to an invert-select mux, but it couldn't trace backwards to fix the
problem if there wasn't a mux in the forwards direction. This may (or
may not) imply that it has limited ability to restructure existing
logic, as Andy wanted.

My test case had an FDCE in an IOB, driving the enable on an OBUFT (in
other words, a registered output enable, all in an IOB). If you
put an inverter between the FDCE and the OBUFT, the mapper complains
that this is an improper connection. This is true, since the
invert-select isn't directly at the output buffer, but the mapper
could have fixed this at the FDCE by inverting the data input and
changing the reset polarity.

Even if the mapper could restructure existing logic, how far would it
have to go? It might be reasonable to detect LUTs which only contain
an inverter, and remove them. It might be reasonable to invert F/F
data inputs and reset polarities. It might be reasonable to De
Morgan-ise an entire logic function, if it turns out that a free
inverter somewhere will save a few LUTs. However, this will cause
other inversions, which will propagate backwards, and the synthesiser
should have done all this work already. You have to draw the line
somewhere.
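The LUT part of this is easy to see in a toy model. The sketch below is
illustrative Python, not any vendor tool's behaviour: it represents a
k-input LUT by its truth-table bits (loosely the same idea as a Xilinx
INIT attribute, though the packing here is my own simplification), and
shows that absorbing a trailing inverter is just complementing every
truth-table bit, so a LUT containing only an inverter never needs to
survive mapping.

```python
# Toy model: a k-input LUT is just its truth table, packed into an int.
# Bit i of `init` is the LUT output for input pattern i (LSB-first).

def lut_eval(init, k, inputs):
    """Evaluate a k-input LUT with truth table `init` on a bit tuple."""
    idx = sum(bit << i for i, bit in enumerate(inputs))
    return (init >> idx) & 1

def absorb_inverter(init, k):
    """An inverter on a LUT output costs nothing: complement every
    truth-table bit instead of spending a second LUT."""
    return init ^ ((1 << (1 << k)) - 1)

# Example: a 2-input AND (init = 0b1000) plus a trailing inverter
# collapses into a single NAND LUT (init = 0b0111).
AND2 = 0b1000
NAND2 = absorb_inverter(AND2, 2)
print(bin(NAND2))   # 0b111

# The two circuits agree on every input pattern.
for a in (0, 1):
    for b in (0, 1):
        assert 1 - lut_eval(AND2, 2, (a, b)) == lut_eval(NAND2, 2, (a, b))
```

Pushing the inverter backwards through a flip-flop (inverting the D
input and swapping the reset polarity) is the same game one level
further back, which is exactly the restructuring the mapper apparently
declines to do.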

Evan
Article: 24173
Subject: FPGAExpress fe_shell and FSM encoding
From: Klaus Falser <kfalser@durst.it>
Date: Fri, 28 Jul 2000 11:38:40 GMT
Links: << >>  << T >>  << A >>
Does anybody know how to force FPGA Express to use binary encoding from
the command line?

I'm using FPGA Express 3.3 and Xilinx Foundation F2.1.
From the GUI I can specify the encoding type (one-hot, binary, ...) for
FSMs, but I have not found a way to do this from the command-line tool
fe_shell.
This would be useful since I'd like to drive the whole synthesis
process with makefiles.

Any help is appreciated
   Klaus

--
Klaus Falser
Durst Phototechnik AG
I-39042 Brixen


Sent via Deja.com http://www.deja.com/
Before you buy.
Article: 24174
Subject: Re: FPGAExpress fe_shell and FSM encoding
From: Dave Vanden Bout <devb@xess.com>
Date: Fri, 28 Jul 2000 08:45:40 -0400
Links: << >>  << T >>  << A >>
Try this:

proj_fsm_coding_style = "onehot"

or

proj_fsm_coding_style = "binary"

We have a document on using makefiles with Foundation at
http://www.xess.com/manuals/fndmake.pdf.
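For what the two styles mean in hardware terms: binary coding needs
ceil(log2(N)) flip-flops for N states, while one-hot uses N flip-flops
with exactly one set at a time. A quick illustrative Python sketch (the
state names are made up):

```python
from math import ceil, log2

def binary_encoding(states):
    """Minimal-width binary codes: ceil(log2(N)) flip-flops."""
    width = max(1, ceil(log2(len(states))))
    return {s: format(i, '0%db' % width) for i, s in enumerate(states)}

def onehot_encoding(states):
    """One flip-flop per state, exactly one bit set per code."""
    n = len(states)
    return {s: format(1 << i, '0%db' % n) for i, s in enumerate(states)}

states = ['IDLE', 'READ', 'WRITE', 'DONE']
print(binary_encoding(states))
# {'IDLE': '00', 'READ': '01', 'WRITE': '10', 'DONE': '11'}
print(onehot_encoding(states))
# {'IDLE': '0001', 'READ': '0010', 'WRITE': '0100', 'DONE': '1000'}
```

One-hot trades flip-flops (cheap in FPGAs) for simpler next-state
decoding, which is why it is often the default for FPGA targets.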



Klaus Falser wrote:

> Does anybody know how to force FPGA-Express to use binary encoding from
> command line mode?
>
> I'm using FPGA Express 3.3 and Xilinx Foundation F2.1.
> From the GUI I can specify the encoding type (one hot, binary .. ) for
> FSM's, but I have not found how to do this from the command line tool
> fe_shell.
> This would be useful for me since I like to do the whole synthesis
> process with makefiles.
>
> Any help is appreciated
>    Klaus
>
> --
> Klaus Falser
> Durst Phototechnik AG
> I-39042 Brixen
>
> Sent via Deja.com http://www.deja.com/
> Before you buy.

--
|| Dr. Dave Van den Bout   XESS Corp.               (919) 387-0076 ||
|| devb@xess.com           2608 Sweetgum Dr.        (800) 549-9377 ||
|| http://www.xess.com     Apex, NC 27502 USA   FAX:(919) 387-1302 ||



