
Messages from 96725

Article: 96725
Subject: Re: Need help with generating video patterns using VHDL
From: John_H <johnhandwork@mail.com>
Date: Thu, 09 Feb 2006 15:31:33 GMT
methi wrote:

> Hi,
> I am currently working on generating digital video using an FPGA.
> I am using VHDL to generate color bars and other test signal patterns.
> I am having a problem with the color bar video.
> When I look at the vector diagram of this video using a digital video
> analyzer, I see that the color vectors are curved and not straight.
> My FPGA also gets a reference video signal as input and my output video
> is timed horizontally and vertically according to the reference.
> What could possibly be the reason for the curved vectors?
> Any idea as to how I can make them straight?
> Any feedback is greatly appreciated.
> Thank you,
> Methi

Since a "color vector" is defined by a single point and the origin in 
the digital video analyzer for a single color, is this the multiple 
vectors for a sweep?

I'm guessing here, but your color may be skewed on composite sweeps due 
to incorrect (absent?) gamma correction.  Does your analyzer expect a 
certain gamma?
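
(As a rough reference, and an assumption about the setup: standard video 
is gamma-corrected per component before encoding, roughly V' = V^(1/2.2). 
If the pattern generator emits linear RGB while the analyzer expects 
gamma-corrected levels, bar amplitudes and sweep traces will bend away 
from their targets.)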

Article: 96726
Subject: Re: vhdl to edif
From: "Brannon" <brannonking@yahoo.com>
Date: 9 Feb 2006 08:42:42 -0800
> 1. compile vhdl file into ngc file  >> say myngc.ngc
> 2. "ngdbuild myedif.edf  myngc.ngc" to combine the edif file and the ngc
> file into a single .ngd file
> 3. use xst on the ngd file?

Concerning your step two, you have to declare one of the two as a black
box in the other. You can do this in EDIF by just having an interface
declared with no contents section. The object containing the black box
will be the top-level object. Send that top-level file to ngdbuild with
the other file in the same folder. ngdbuild will automatically merge
them. You'll see it in the log.

I usually use ngc2edif to make an ngc file a black box in my EDIF; I
just take the output of the ngc2edif and chop it down to the (cell ...
(interface ....) declaration, put two parentheses on the end, and then
paste that into my EDIF file.


Article: 96727
Subject: Re: vhdl to edif
From: Duane Clark <dclark@junkmail.com>
Date: Thu, 09 Feb 2006 16:59:20 GMT
Leow Yuan Yeow wrote:
> Thanks for all your patience! I have never used the command line before, and 
> the gui seems to only allow a project to have files of the same type: vhdl 
> source files, schematic, edif, or ngc/ngo.
> Is this what I should be doing?
> 1. compile vhdl file into ngc file  >> say myngc.ngc
> 2, "nbdbuild myedif.edf  myngc.ngc" to combine the edif file and the ngc 
> file into a single .ngd file
> 3. use xst on the ngd file?
> 
> I have looked at the XST manual for command line...but it looks like Greek 
> to me. Am I supposed to learn it by trial and error and figure it out 
> myself, or is there some better tutorial out there?
> Thanks!

As long as you are going to use ngc files, you might as well use the 
gui. The gui does not require that the files have the same type. I don't 
use schematics, but I have projects that include all kinds of 
combinations of VHDL, Verilog, edif, and ngc files.

I am assuming you have an edif file from another source, and you want to 
use it with some code you have in VHDL?

In your VHDL code, just put an entity declaration in for the edif file, 
and in the architecture body, connect it up. The name of the entity 
should match both the name of the edif file and the name of the top 
level entity within the file. You can add declarations specifying that it 
is a "black box", but that is not necessary with recent versions of ISE. 
Put the edif file in some directory within your project.

Create your ISE project as usual. In the sources window, the edif file 
will show up as a "?". That is ok. Select your top level file, then 
select the "Translate" step in "Implement Design". Right click. In the 
box labeled "Macro Search Path", click browse, and find the directory 
with your edif file. Once that is set, run ISE as normal. When it gets 
to the translate step, ngdbuild will find the edif file and stick it in 
for you.
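
As a concrete sketch of that approach (the netlist name my_core.edf and 
its ports here are invented for illustration):

library ieee;
use ieee.std_logic_1164.all;

entity wrapper is
   port( clk : in  std_logic;
         a   : in  std_logic_vector(7 downto 0);
         b   : out std_logic_vector(7 downto 0) );
end wrapper;

architecture rtl of wrapper is
   -- Black-box declaration for an EDIF netlist named my_core.edf; the
   -- component name must match both the file name and the top-level
   -- cell inside it.  No architecture for it exists in the project.
   component my_core
      port( clk  : in  std_logic;
            din  : in  std_logic_vector(7 downto 0);
            dout : out std_logic_vector(7 downto 0) );
   end component;
begin
   -- Instantiate it like any other component; at the Translate step,
   -- ngdbuild finds my_core.edf on the macro search path and merges it
   -- into the .ngd.
   u_core : my_core port map( clk => clk, din => a, dout => b );
end rtl;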

Article: 96728
Subject: Re: latest XILINX WebPack is totally broken
From: "richard" <richard@idcomm.com>
Date: 9 Feb 2006 09:16:35 -0800
Sadly, you're quite mistaken.

I've been in contact with the local XILINX sales office and their FAE
agrees that v7.1 was too slow to be of any use, and v8.1 is so badly
messed up that one can't really do much with it at all.  I've gone back
to using v6.3.03, which seems, aside from long-standing issues that
I've reported, some dating back as far as release 4.2 yet have never
been addressed, such as errors in netlisting, etc.  The most annoying
problems I've encountered have been associated with ECS, but there have
been others, e.g. intermittent failure of the software to adhere to
Windows conventions with respect to cut and paste, etc, (I could go on,
but what's the use).

I've had over a dozen cases, probably closer to three dozen, open over
the last year, and certainly quite a few more than that in years past.
Only one or two have been resolved in any sense, the remainder having
been escalated via CR's, but still remaining unresolved.  They're
always scheduled for repair in "the next release" but that seldom
happens.  I have observed that if I don't complain, nothing gets fixed.
 Of course, if I do complain, there's no guarantee, but I have to go on
record.

In 8.1, several major things that immediately impair progress have been
introduced.  I've complained about that, but I haven't time to do
XILINX' work for them.  They need to put a few dozen, or perhaps a few
hundred, people who've used software to do useful work, and not just to
create more software, on the task of testing this software against written
documents which provide specific criteria consistent with how it is
supposed to work.

In release 5.0 (2002) I reported that ECS fails to change line-draw
mode when the user selects "manual mode."  The workaround is still the
only way of dealing with that, and the workaround is to leave line-draw
mode and then re-enter it.  That sets manual mode.  Going back to
autorouting mode requires the same set-exit-reenter sequence.
Likewise, I complained in 2002 about the fact that bus taps won't
always align with the line-draw grid.  That's still a problem.
The workaround for that is to exit ECS and then re-enter, whereupon the
bus-tap to line registration is resolved.  There are numerous others
that work similarly.  ECS occasionally reports errors that it can't
show on the display.  When one exits ECS and re-enters it, they're
gone.  The auto-increment/decrement of signal names has, since v4.2,
been randomly fouled up.  Sometimes, when you want it to increment, it
decrements, and vice-versa.  It's not entirely predictable, but it goes
the wrong way more often than not.  Since this is their "try it before
you buy it" package, I've not bought XILINX software since v3.1 of
Foundation, which wasn't a real value either.

Only today, I finally got a reply from the tech support people
regarding the v8.1 sheet size management, which I reported on the
weekend.  When you change sheet size, v8.1 doesn't fully recognize the
change.  Consequently, as you place symbols, it repeatedly complains
that the symbol is placed outside the sheet boundaries, which certainly
isn't visible.  In order to get the display to reflect the sheet size,
you have to exit ECS and then re-enter.

Of course, one could live with that one on its own.  But some genius
decided it would be "cool" to make the various process menus moveable,
which doesn't help anything; made them so small, vertically, relative
to the entire window that by default you can't even read the title;
hid the boundary, which, BTW, is not located in the conventional
place, so you can't reasonably be expected to expand it; and then made
the window overlap, rather than border, the companion menu to which
you concurrently need access.  Now, I use a 1600x1200 display because
I'm getting too old to use finer resolution, but I think I should be
able to see what's in front of me when I'm at the default
configuration.

Clearly, nobody even cursorily tested the ECS module when this tool
suite was released.

There are problems with the associated ModelSim, as well, but I'm not
going to describe 'em here, and I'm not going to mess with that
complete mistake that XILINX released until I hear rumors that it's
working, which may be a while.  In the meantime, I'll use the older
stuff that seems to work O.K.

Bitching about XILINX software has done so little good that I don't
even get any relief from the frustration of having no other option than
to switch to Altera.  Their stuff isn't perfect either.  <sigh>

Richard


Article: 96729
Subject: Re: latest XILINX WebPack is totally broken
From: "richard" <richard@idcomm.com>
Date: 9 Feb 2006 09:30:21 -0800
The "real" problem lies in that the folks who manage and those who
generate software tools are seldom folks who have extensive experience
with using tools of that genre.  I've often had the unfortunate
experience of dealing with software "engineers" who insist on putting
in features not required for the product to work as specified, "stubs"
for future unspecified features, etc. and yet have omitted features
without which the product was useless.

I think a proper treatment for software generators and their bosses
would be to lock them in a room with their product and a task
assignment, and not allow them to leave the room for any reason,
including bathroom breaks, until (1) their work was fully verified
against the objective specification,  (2) they had completed their
task, in this case, perhaps, generation, verification, and
implementation of a 2M-gate FPGA which required the use of each and
every claimed gate in the FPGA (maybe locking a marketing manager in
with them would help, so they'd have something/someone to eat) on a 16
MHz '386 with the minimum of RAM and HDD space.  You might as well weld
the doors shut, because that would never happen.

What puzzles me is how they can take such a giant step backward.  Sure,
there were some bugs.  Their support people always denied that they
were bugs, but they were.  Whenever it internally interprets a simple
A=>A, B=>B signal routing as A=>B, B=>A, it's obviously a bug.  They
deny it and try to sweep it under the rug, but it's a bug.  It just
reproduces under the rug and becomes a colony of them.

Richard


Article: 96730
Subject: Re: why does speed grade effect VHDL program??
From: "ernie" <ernielin@gmail.com>
Date: 9 Feb 2006 09:31:31 -0800
Hi,

1. My bad.  You should put "CLK" in the To name, not the From name.

2. You open the assignment editor by typing Ctrl-Shift-A.  I have v5.1,
but I don't think the keyboard shortcut has changed.

3. At the top, make sure that the "All" or "Logic Options" filter is
selected.  I think it defaults to "Pin" the first time you open the
editor.

How are you generating the clock for the data?  If you're bit-banging,
that'll be a real headache to deal with...it's better to generate a
strobe or some kind of asynchronous data valid signal.

Cheers.


Article: 96731
Subject: Lattice new ECP2 parts
From: "rickman" <spamgoeshere4@yahoo.com>
Date: 9 Feb 2006 10:02:15 -0800
Seems these are too new to even be in production, but they are very
interesting.  Anyone have any pricing on them yet?  I would be
interested in the 6 or the 12 in the 256-pin FBGA.  They run on a 1.2
volt core.  I don't know of an ARM CPU to match that.  I would hate to have
to add another power supply voltage.


Article: 96732
Subject: Re: cheap USB analyzer based on FPGA
From: "Andy Peters" <Bassman59a@yahoo.com>
Date: 9 Feb 2006 10:48:15 -0800
colin wrote:

> I'm curious as to what you're trying to achieve; I might be completely
> wrong about this but I feel some dongle cracking in the works!

If you're doing USB device development, a bus analyzer is an absolute
requirement.  The least expensive useful bus analyzer is the Ellisys
Tracker 110 and it costs almost $1000.

I guess $1000 is a bit expensive if you're trying to crack dongles ...

-a


Article: 96733
Subject: Re: cheap USB analyzer based on FPGA
From: "Jerome" <nospam@nospam.fr>
Date: Thu, 9 Feb 2006 20:31:33 +0100
NO NO, I'm NOT trying to crack anything.
I just want to do USB development (using libusb) and I need a USB analyzer.
I'm sure it is feasible with a $200 FPGA, which is a third of the price of 
the cheapest USB analyzer.


"Andy Peters" <Bassman59a@yahoo.com> wrote in message 
news:1139510895.770856.229330@f14g2000cwb.googlegroups.com...
> colin wrote:
>
>> I'm curious as to what you're trying to achieve; I might be completely
>> wrong about this but I feel some dongle cracking in the works!
>
> If you're doing USB device development, a bus analyzer is an absolute
> requirement.  The least expensive useful bus analyzer is the Ellisys
> Tracker 110 and it costs almost $1000.
>
> I guess $1000 is a bit expensive if you're trying to crack dongles ...
>
> -a
> 



Article: 96734
Subject: Re: vhdl to edif
From: "Leow Yuan Yeow" <nordicelf@msn.com>
Date: Thu, 9 Feb 2006 11:35:20 -0800
Thanks for all your patience! I have never used the command line before, and 
the gui seems to only allow a project to have files of the same type: vhdl 
source files, schematic, edif, or ngc/ngo.
Is this what I should be doing?
1. compile vhdl file into ngc file  >> say myngc.ngc
2, "nbdbuild myedif.edf  myngc.ngc" to combine the edif file and the ngc 
file into a single .ngd file
3. use xst on the ngd file?

I have looked at the XST manual for command line...but it looks like Greek 
to me. Am I supposed to learn it by trial and error and figure it out 
myself, or is there some better tutorial out there?
Thanks!

YY

"Duane Clark" <dclark@junkmail.com> wrote in message 
news:zhtGf.23106$Jd.21261@newssvr25.news.prodigy.net...
> Leow Yuan Yeow wrote:
>> Hi, may I know whether there is any free program that is able to convert 
>> a vhdl file to a .edf file? I am unable to find such options in the 
>> Xilinx ISE Navigator. I have tried using the Xilinx ngc2edif convertor 
>> but when I tried to generate a bit file from the edf file its says:
>>
>> ERROR:NgdBuild:766 - The EDIF netlist 'synthetic2.edf' was created by the
>>    Xilinx NGC2EDIF program and is not a valid input netlist.  Note that
>>    this EDIF netlist is intended for communicating timing information to
>>    third-party synthesis tools. Specifically, no user modifications to
>>    the contents of this file will effect the final implementation of the
>>    design.
>> ERROR:NgdBuild:276 - edif2ngd exited with errors (return code 1).
>> ERROR:NgdBuild:28 - Top-level input design file "synthetic2.edf" cannot be
>>    found or created. Please make sure the source file exists and is of a
>>    recognized netlist format (e.g., ngo, ngc, edif, edn, or edf).
>>
>> Any help is appreciated!
>
> Xilinx's XST can be told to generate edif instead of ngc, though since 
> ngdbuild can understand the ngc format, I am not sure what you expect to 
> gain by doing it. You can combine ngc and edif files with ngdbuild, and it 
> should combine them fine.
>
> Anyway, XST takes an "-ofmt" parameter, which can be set to "NGC" or 
> "EDIF". However, the gui does not provide a method for doing that, so you 
> would need to execute XST from the command line. 



Article: 96735
Subject: Re: latest XILINX WebPack is totally broken
From: Steve Lass <lass@xilinx.com>
Date: Thu, 09 Feb 2006 14:06:05 -0700
Richard,

After further investigation, I see that you have entered cases under a
different email address.  I'll look into them and get back to you.

Steve

richard wrote:
> [full text quoted; see Article 96728 above - snipped]


Article: 96736
Subject: Re: Parallel NCO (DDS) in Spartan3 for clock synthesis - highest possible speed?
From: Jim Granville <no.spam@designtools.co.nz>
Date: Fri, 10 Feb 2006 10:29:28 +1300
PeterC wrote:

> Gurus,
> 
> I have built and tested a numerically-controlled oscillator (clock
> generator) using a simple phase accumulator (adder) and two registers.
> One register contains the tuning word (N), and the other is used in the
> feedback loop into the second input of the adder.
> 
> I take the MSB of the feedback register as my synthesised clock. I am
> generating sub 50kHz clock frequencies, by clocking the feedback
> register at 100 MHz. The accumulator is a 32 bit adder as  is the
> feedback register (of course). Works nicely on a board (my tuning word
> comes from a processor chip, and my spectrum analyzer tells the truth
> when I look at my MSB generated clock).
> 
> To reduce the jitter I would like to run two or more phase accumulators
> in parallel which are clock-enabled on every-other clock cycle (as per
> Ray Andraka's suggestion from the "how to speed up my accumulator" post
> by Moti in Dec 2004) and then switch between the MSBs of each
> accumulator using a MUX on the MSBs.

At your sub-50 kHz output, what frequency step can you tolerate?
You can trade off average precision for purity.

DDS gives a numerical frequency whose average has many digits... but,
as you have found, it has a lot of phase jitter.
The alternative is a simple divide-by-N (for 100 MHz to 50 kHz, N=2000),
so your next frequency step (/2001) is just under 25 Hz away.
For audio, that's probably tolerable?

(You can think of the DDS as dithering between these two values.)

At 1 kHz, the steps are much smaller.

More complex is to use a DPLL and create Fo = M/N, where you scale both
M and N. You will pick up the DPLL jitter as well, but that's usually
much smaller than system clock times.

-jg
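
For reference, a minimal sketch of the single phase-accumulator NCO 
described above (32-bit accumulator, 100 MHz clock; entity and port 
names are invented for illustration):

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity nco is
   port( clk     : in  std_logic;              -- 100 MHz system clock
         tuning  : in  unsigned(31 downto 0);  -- N: fout = 100 MHz * N / 2**32
         clk_out : out std_logic );            -- synthesized clock
end nco;

architecture rtl of nco is
   signal acc : unsigned(31 downto 0) := (others => '0');
begin
   process(clk)
   begin
      if rising_edge(clk) then
         acc <= acc + tuning;   -- phase accumulator, wraps modulo 2**32
      end if;
   end process;
   -- The MSB is the output clock: its average frequency is exact, but
   -- every edge lands on the 10 ns clock grid, hence the phase jitter.
   clk_out <= acc(31);
end rtl;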


Article: 96737
Subject: Re: Async Processors
From: "rickman" <spamgoeshere4@yahoo.com>
Date: 9 Feb 2006 13:50:07 -0800
Jim Granville wrote:
> rickman wrote:
>
> > Jim Granville wrote:
> >
> >>Further to an earlier thread on ASYNC design, and cores, this in the
> >>news :
> >>http://www.eet.com/news/design/showArticle.jhtml?articleID=179101800
> >>
> >>  and a little more is here
> >>http://www.handshakesolutions.com/Products_Services/ARM996HS/Index.html
> >>
> >>  with some plots here:
> >>http://www.handshakesolutions.com/assets/downloadablefile/ARM996HS_leaflet_feb06-13004.pdf
> >>
> >>  They don't mention the Vcc of the compared 968E-S, but the Joules and
> >>EMC look compelling, as does the 'self tracking' aspect of Async.
> >>
> >>  They also have an Async 8051
> >>http://www.handshakesolutions.com/Products_Services/HT-80C51/Index.html
> >
> >
> > I seem to recall participating in a discussion of asynch processors a
> > while back and came to the conclusion that they had few advantages in
> > the real world.  The claim of improved speed is a red herring.  The
> > clock cycle of a processor is fixed by the longest delay path which is
> > at lowest voltage and highest temperature.  The same is true of the
> > async processor, but the place where you have to deal with the
> > variability is at the system level, not the clock cycle.  So at room
> > temperature you may find that the processor runs faster, but under
> > worst case conditions you still have to get X amount of computations
> > done in Y amount of time.
>
> Yes, but systems commonly spend a LOT of time waiting on external, or
> time, events.

Yes, and if power consumption is important the processor can slow or
even stop the clock.  That is often used when power consumption is
critical.  That's all the async processor does, it stops its own clock.


BTW, how does the async processor stop to wait for IO?  The ARM
processor doesn't have a "wait for IO" instruction.  So it has to set
an interrupt on an IO pin change or register bit change and then stop
the CPU, just like the clocked processor.  No free lunch here!


>    The two processors will likely be the same
> > speed or the async processor may even be slower.  With a clocked
> > processor, you can calculate exactly how fast each path will be and
> > margin is added to the clock cycle to deal with worst case wafer
> > processing.  The async processor has a data path and a handshake path
> > with the handshake being designed for a longer delay.  This delay delta
> > also has to have margin and likely more than the clocked processor to
> > account for two paths.
>
>   Why ? In the clocked case, you have to spec to cover Process spreads,
> and also Vcc and Temp. That's three spreads.
>   The Async design self tracks all three, and the margin is there by ratio.

Yes, the async processor will run faster when conditions are good, but
what can you do with those extra instruction cycles?  You still have to
design your application to execute M instructions in N amount of time
under WORST CASE conditions.  The extra speed is wasted unless, like I
said, you want to do some SETI calcs or something that does not need to
be done.  The async processor just moves the synchronization to the
system level where you sit and wait instead of at the gate level at
every clock cycle.


>    This may make the async processor slower in the
> > worst case conditions.
> >
> > Since your system timing must work under all cases, you can't really
> > use the extra computations that are available when not running under
> > worst case conditions, unless you can do SETI calculations or something
> > that is not required to get done.
> >
> > I can't say for sure that the async processor does not use less power
> > than a clocked processor, but I can't see why that would be true.
>
> You did look at their Joule plots ?

Yes, but there are too many unknowns to tell if they are comparing
apples to oranges.  Did the application calculate the Fibonacci series,
or do IO with waits?  Did the clocked processor use clock gating to
disable unused sections or did every section run full tilt at all
times?  I have no idea how real the comparison is.  Considering how the
processor works I don't see where there should be a difference.  Dig
below the surface and consider how many gate outputs are toggling and
you will see the only real difference is in the clocking itself;
compare the clock tree to the handshake paths.


> > Both
> > are clocked.  The async processor is clocked locally and dedicates lots
> > of logic to generating and propagating the clock.
>
> Their gate count comparisons suggest this cost is not as great as one
> would first think.

But the gate count is higher in the async processor.


> > A clocked chip just has to distribute the clock.
>
> ... and that involves massive clock trees, and amps of clock driver
> spikes, in some devices....(not to mention electro migration issues...)

You can wave your hands and cry out "massive clock trees", but you
still have to distribute clocks everywhere in the async part, it is
just done differently with lots of logic in the clock path and they
call it a handshake.  Instead of trying to minimize the clock delay,
they lengthen it to exceed the logic delay.


> > The rest of the logic is the same between
> > the two.
> >
> > I suppose that the async processor does have an advantage in the area
> > of noise.
>
> yes.  [and probably makes some code-cracking much harder...]
>
> As SOC designs add more and more analog and even RF onto the
> > same die, this will become more important.  But if EMI with the outside
> > world is the consideration, there are techniques to spread the spectrum
> > of the clock that reduce the generated EMI.  This won't help on-chip
> > because each clock edge generates large transients which upset analog
> > signals.
> >
> > I can't comment on the data provided by the manufacturer.  I expect
> > that you can achieve similar results with very agressive clock
> > management.
>
> Perhaps, in the limiting case, yes - but you have two problems:
> a) That is a LOT of NEW system overhead, to manage all that aggressive
> clock management...
> b) The Async core does this clock management 'for free' - it is part of
> the design.

It is "free" the same way in any design.  The clock management in a
clocked part would not be software, it would be in the hardware.

> I don't recall the name of the company, but I remember
> > recently reading about one that has cut CPU power significantly that
> > way.  I think they were building a processor to power a desktop
> > computer and got Pentium 4 processing speeds at just 25 Watts compared
> > to 80+ Watts for the Pentium 4.
>
>   Intel are now talking of Multiple/Split Vccs on a die, including
> some mention of magnetic layers, and inductors, but that is horizon
> stuff, not their current volume die.
>   I am sure they have an impressive road map, as that is one thing that
> swung Apple... :)

I found the article in Electronic Products, FEB 2006, "High-performance
64-bit processor promises tenfold cut in power", pp24-26.  It sounds
like a real hot rod with dual 2 GHz processors, dual high speed memory
interfaces, octal PCI express, gigabit Ethernet and lots of other
stuff.  5 to 13 Watts typical and 25 Watts max.

So you can do some amazing stuff with power without going to async
clocking.


> That may not convey well to the
> > embedded world where there is less parallelism.  So I am not a convert
> > to async processing as yet.
>
>   I'd like to see a more complete data sheet, and some real silicon, but
> the EMC plot of the HT80C51 running identical code is certainly an eye
> opener. (if it is a true comparison).
>
>   It is nice to see (pico) Joules / Opcode quoted, and that is the right
> units to be thinking in.


Article: 96738
Subject: Re: Async Processors
From: "rickman" <spamgoeshere4@yahoo.com>
Date: 9 Feb 2006 14:01:19 -0800
fpga_toys@yahoo.com wrote:
> rickman wrote:
> >The two processors will likely be the same
> > speed or the async processor may even be slower.  With a clocked
> > processor, you can calculate exactly how fast each path will be and
> > margin is added to the clock cycle to deal with worst case wafer
> > processing.  The async processor has a data path and a handshake path
> > with the handshake being designed for a longer delay.  This delay delta
> > also has to have margin and likely more than the clocked processor to
> > account for two paths.  This may make the async processor slower in the
> > worst case conditions.
>
> There are a lot of different async technologies; not all suffer from
> this. Dual rail with an active ack does not rely on the handshake
> having a longer time to envelope the data path worst case. Phased
> Logic designs are one example.

Can you explain?  I don't see how you can async clock logic without
having a delay path that exceeds the worst path delay in the logic.
There is no way to tell when combinatorial logic has settled other than
to model the delay.

I found some links with Google, but I didn't gain much enlightenment
with the nickel tour.  What I did find seems to indicate that the
complexity goes way up since each signal is two signals of value and
timing combined called LEDR encoding.  I don't see how this is an
improvement.


> > Since your system timing must work under all cases, you can't really
> > use the extra computations that are available when not running under
> > worst case conditions, unless you can do SETI calculations or something
> > that is not required to get done.
>
> Using dual rail with ack, there is no worst case design consideration
> internal to the logic ... it's just functionally correct by design at
> any speed. So, if the chip is running fast, so does the logic, up until
> it must synchronize with the outside world.

That is the point.  Why run fast when you can't make use of the extra
speed?  Your app must be designed for the worst case speed and anything
faster is lost.


> > I can't say for sure that the async processor does not use less power
> > than a clocked processor, but I can't see why that would be true.  Both
> > are clocked.  The async processor is clocked locally and dedicates lots
> > of logic to generating and propagating the clock.  A clocked chip just
> > has to distribute the clock.  The rest of the logic is the same between
> > the two.
>
> For fine-grained async, there is very little cascaded logic, and as
> such very little transitional glitching in comparison to the relatively
> deep combinatorials that are clocked. This transitional glitching at
> clocks consumes more power than the best case behavioral of clean
> transitions of all signals at clock edges with no prop or routing
> delays.
>
> For coarse-grained async, the advantage obviously goes away.

I think you are talking about a pretty small effect compared to the
overall power consumption.


> > I suppose that the async processor does have an advantage in the area
> > of noise.  As SOC designs add more and more analog and even RF onto the
> > same die, this will become more important.  But if EMI with the outside
> > world is the consideration, there are techniques to spread the spectrum
> > of the clock that reduce the generated EMI.  This won't help on-chip
> > because each clock edge generates large transients which upset analog
> > signals.
>
> By design, clocked logic creates a distribution of additive current
> spikes following clock edges, even if spread spectrum. This simply is
> less, if any, of a problem in async designs.  Async has a much better
> chance of creating a larger DC component in the power demand by time
> spreading transitions so that the on-chip capacitance can filter the
> smaller transition spikes, instead of the high AC content, with a lot
> of frequency components, that you get with clocked designs.
>
> In the whole discussion about the current at the center of the ball
> array and DC currents, this was the point that was missed. If you slow
> the clock down enough, the current will go from zero, to a peak shortly
> after a clock, and back to zero, with any clocked design. To get the
> current profile to maintain a significant DC level for dynamic
> currents, requires carefully balancing multiple clock domains and using
> deeper than one level of LUTs with long routing to time spread the
> clock currents.  Very Very regular designs, with short routing and a
> single lut depth, will generate a dynamic current spike 1-3 lut delays
> from the clock transition. On small chips which do not have a huge
> clock net skew, this will mean most of the dynamic current will
> occurring in a two or three lut delay window following clock
> transitions. Larger designs with a high distribution of multiple levels
> of logic and routing delays flatten this distribution out.
>
> Dual rail with ack designs just completely avoid this problem.

Care to explain how Dual rail with ack operates?
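
(For background, one common dual-rail convention: each logical bit 
travels on two wires, with 01 = logic 0, 10 = logic 1, and 00 = an 
"empty" spacer.  The receiver knows the data has settled when every 
bit pair has left the spacer state, and raises an ack that lets the 
sender return to spacer.  Completion is detected rather than modeled 
with a matched delay.)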


Article: 96739
Subject: Re: Parallel NCO (DDS) in Spartan3 for clock synthesis - highest possible speed?
From: "PeterC" <peter@geckoaudio.com>
Date: 9 Feb 2006 14:12:42 -0800
Thank you for your detailed system description Raymond - unfortunately
cost is critical, and I simply don't have the option of using any
external components - hence the desire to synthesize useable audio
clocks completely in the FPGA, ideally from a cheap crystal (or the
crystal already used by the processor chip, as I'm doing now).

PeterC.


Article: 96740
Subject: Re: question for the EDK users out there...
From: John Williams <jwilliams@itee.uq.edu.au>
Date: Fri, 10 Feb 2006 08:54:38 +1000
Peter Ryser wrote:

>> Unfortunately it doesn't work in V4 ES (early silicon) parts due to a
>> silicon bug.  I assume it was fixed for production silicon.
> 
> 
> That was only a problem for very very early LX silicon and has been
> fixed for quite some time now.

We bought 20 ML401 boards in early 2005, and I believe all of them are
affected :(

John

Article: 96741
Subject: Re: Software reset for the MicroBlaze
From: John Williams <jwilliams@itee.uq.edu.au>
Date: Fri, 10 Feb 2006 08:55:55 +1000
Simon Peacock wrote:

> That would be a hardware reset .. not software :-).... but it depends on
> what you call a hard reset

OK, I'll give you that :) my reading of the question was "how do I
initiate a reset from within software".

In Linux land we call it "shutdown -r now"

John

Article: 96742
Subject: Re: Async Processors
From: fpga_toys@yahoo.com
Date: 9 Feb 2006 14:56:09 -0800

rickman wrote:
> Can you explain?  I don't see how you can async clock logic without
> having a delay path that exceeds the worst path delay in the logic.
> There is no way to tell when combinatorial logic has settled other than
> to model the delay.

Worst case sync design requires that the clock period be slower than
the longest worst case combinatorial path ... ALWAYS ... even when the
device is operating under best case conditions. Devices with best case
fab operating under best case environmentals are forced to run just as
slow as worst case fab devices under worst case environmentals.

The tradeoff with async is to accept that under worst case fab and
worst case environmentals, the design will run a little slower because
of the ack path.

However, under typical conditions, and certainly under best case fab
and best case environmentals, the expectation is that the ack path
delay costs are a minor portion of the improvements gained by using
the ack path. If the device has very small deviations in performance
from best case to worst case, and the ack costs are high, then there
clearly isn't any gain to be had.  Other devices, however, do offer
this gain for certain designs.

Likewise, many designs might be clock constrained by an exception path
that is rarely exercised, but the worst case delay for that rare path
will constrain the clock rate for the entire design. With async, that
problem goes away, as the design can operate with timings for the
normal path without worrying about the slowest worst case paths.

> I think you are talking about a pretty small effect compared to the
> overall power consumption.

Depends greatly on the design and logic depth. For your design it
might not make a difference, as you suggest. For a multiplier it can
be significant, as every transition, including the glitches, costs the
same dynamic power.

Article: 96743
Subject: Re: Async Processors
From: Jim Granville <no.spam@designtools.co.nz>
Date: Fri, 10 Feb 2006 12:57:16 +1300
rickman wrote:
> BTW, how does the async processor stop to wait for IO?  The ARM
> processor doesn't have a "wait for IO" instruction.  

Yes, that has to be one of the keys.
Done properly, JNB  Flag,$ should spin only that opcode's logic, and
activate only the small cache doing it.

> So it has to set
> an interrupt on a IO pin change or register bit change and then stop
> the CPU, just like the clocked processor.  No free lunch here!

That's the coarse-grain way, the implementation above can drop to
tiny power anywhere.


> Yes, the async processor will run faster when conditions are good, but
> what can you do with those extra instruction cycles?  

Nothing; the point is that you save energy by finishing earlier.

>>Their gate count comparisons suggest this cost is not as great as one
>>would first think.
> 
> 
> But the gate count is higher in the async processor.

Not in the 8051 example.
In the ARM case, it is 89:88, pretty much even.

The thing to do now, is wait for some real devices,
and better data.

-jg


Article: 96744
Subject: Re: Async Processors
From: "rickman" <spamgoeshere4@yahoo.com>
Date: 9 Feb 2006 15:58:10 -0800
fpga_toys@yahoo.com wrote:
> rickman wrote:
> > Can you explain?  I don't see how you can async clock logic without
> > having a delay path that exceeds the worst path delay in the logic.
> > There is no way to tell when combinatorial logic has settled other than
> > to model the delay.
>
> Worst case sync design requires that the clock period be slower than
> the longest worst case combinatorial path ... ALWAYS ... even when the
> device is operating under best case conditions. Devices with best case
> fab operating under best case environmentals are forced to run just as
> slow as worst case fab devices under worst case environmentals.
>
> The tradeoff with async is to accept that under worst case fab and
> worst case environmentals, the design will run a little slower because
> of the ack path.
>
> However, under typical conditions, and certainly under best case fab
> and best case environmentals, the expectation is that the ack path
> delay costs are a minor portion of the improvements gained by using
> the ack path. If the device has very small deviations in performance
> from best case to worst case, and the ack costs are high, then there
> clearly isn't any gain to be had.  Other devices, however, do offer
> this gain for certain designs.
>
> Likewise, many designs might be clock constrained by an exception path
> that is rarely exercised, but the worst case delay for that rare path
> will constrain the clock rate for the entire design. With async, that
> problem goes away, as the design can operate with timings for the
> normal path without worrying about the slowest worst case paths.

You have ignored the real issue.  The issue is not whether the async
design can run faster under typical conditions; we all know it can.
The issue is how do you make use of that faster speed?  The system
design has to work in the worst case conditions, so you can only use
the available performance under worst case conditions.

You can do the same thing with a clocked design.  Measure the
temperature and run the clock faster when the temperature is cooler.
It just is not worth the effort since you can't do anything useful with
the extra instructions.


> > I think you are talking about a pretty small effect compared to the
> > overall power consumption.
>
> Depends greatly on the design and logic depth. For your design it might
> not make a difference, as you suggest. For a multiplier it can be
> significant, as every transition, including the glitches, costs the
> same dynamic power.

The glitching happens in any design.  Inputs change and create changes
on the gate outputs which feed other gates, etc until you reach the
outputs.  But the different paths will have different delays and the
outputs as well as the signals in the path can jump multiple times
before they settle.  The micro-glitching you are talking about will
likely cause little additional glitching relative to what already
happens.  Of course, YMMV.


Article: 96745
Subject: Xilinx ISERDES Q1 issues
From: "Brad Smallridge" <bradsmallridge@dslextreme.com>
Date: Thu, 9 Feb 2006 16:05:45 -0800
I am getting strange results in the Xilinx ISERDES Q1 output.
Specifically it will sometimes be almost random values. And the
value that appears at Q1 will sometimes not propagate to Q5 in
a 4 bit shifter as it should.  Does the Q1 have some sort of special
clocking?

This is the second time I've seen this, the first being in hardware and
the workaround was to use the higher q outputs, since the timing of
the framing signal wasn't on the data boundary anyway.

This time I have it simulated in ISE 7.1.  I will attach the code
in a second message.

Has anyone seen this kind of problem?

Brad Smallridge
Ai Vision



Article: 96746
Subject: Re: Async Processors
From: "rickman" <spamgoeshere4@yahoo.com>
Date: 9 Feb 2006 16:08:35 -0800
Jim Granville wrote:
> rickman wrote:
> > BTW, how does the async processor stop to wait for IO?  The ARM
> > processor doesn't have a "wait for IO" instruction.
>
> Yes, that has to be one of the keys.
> Done properly, JNB  Flag,$ should spin only that opcode's logic, and
> activate only the small cache doing it.

No, it should not spin since that still requires clocking of the fetch,
decode, execute logic.  You can do better by just stopping until you
get an interrupt.

> > So it has to set
> > an interrupt on a IO pin change or register bit change and then stop
> > the CPU, just like the clocked processor.  No free lunch here!
>
> That's the coarse-grain way, the implementation above can drop to
> tiny power anywhere.

I disagree.  Stopping the CPU can drop the power to static levels.  How
can you get lower than that?

> > Yes, the async processor will run faster when conditions are good, but
> > what can you do with those extra instruction cycles?
>
> Nothing, the point is you save energy, by finishing earlier.

How did you save energy?  You are thinking of a clocked design where
the energy is a function of time because the cycles are fixed in
duration.  In the async design energy is not directly a function of
time but rather a function of the processing.  In this case the
processing takes the same amount of energy, it just gets done faster.
Then you wait until the next external trigger that you need to
synchronize to.  No processing or energy gain, just a longer wait time.



> >>Their gate count comparisons suggest this cost is not as great as one
> >>would first think.
> >
> >
> > But the gate count is higher in the async processor.
>
> Not in the 8051 example.
> In the ARM case, it is 89:88, pretty much even.
>
> The thing to do now, is wait for some real devices,
> and better data.

In the real world two equivalent designs will have to take more gates
for the async design.  You need all the same gates as in the clocked
design, you subtract the clock tree and add back in the async clocking.
 I expect this would be nearly a wash in any design.  The only way the
async design can be smaller is if they make other changes.

I say that clock management will be easier to use and implement and
give the same results as async clocking.  In a large sense, the "async"
clocking is just a way to gate the clock to each logic block on a cycle
by cycle basis.  It is really just a matter of what you claim to get
from this.  Speed is not one of these gains.


Article: 96747
Subject: Re: Xilinx ISERDES Q1 issues
From: "Brad Smallridge" <bradsmallridge@dslextreme.com>
Date: Thu, 9 Feb 2006 16:19:40 -0800

-- The code

library IEEE;
use IEEE.STD_LOGIC_1164.ALL;
use IEEE.STD_LOGIC_ARITH.ALL;
use IEEE.STD_LOGIC_UNSIGNED.ALL;

library UNISIM;
use UNISIM.VComponents.all;

entity top is
 port(
   sys_clk_in : in std_logic;
   sys_rst_in : in std_logic;
   data_in    : in std_logic_vector(3 downto 0);
   dq         : inout std_logic;
   q1         : out std_logic;
   q2         : out std_logic;
   q3         : out std_logic;
   q4         : out std_logic;
   q5         : out std_logic;
   q6         : out std_logic   );
 end top;

architecture Behavioral of top is

 component sys_dcm
 port(
   clkin_in         : in std_logic;
   rst_in           : in std_logic;
   clkin_ibufg_out  : out std_logic;
   clk0_out         : out std_logic;
   clk2x_out        : out std_logic;
   clk2x180_out     : out std_logic;
   clk90_out        : out std_logic;
   clk180_out       : out std_logic;
   clk270_out       : out std_logic;
   locked_out       : out std_logic );
 end component;

 signal sys_clk              : std_logic;
 signal sys_clk180           : std_logic;
 signal sys_clkdiv           : std_logic;
 signal sys_clkdiv90         : std_logic;
 signal sys_clkdiv180        : std_logic;
 signal sys_clkdiv270        : std_logic;
 signal sys_lock             : std_logic;
 signal sys_lock_delayed     : std_logic;
 signal sys_rst_in_not       : std_logic;
 signal sys_reset            : std_logic;

 signal  iobuf_i      : std_logic;  -- data driven out to the pad (from OSERDES OQ)
 signal  iobuf_t      : std_logic;  -- tristate control ('1' = high-Z)
 signal  iobuf_o      : std_logic;  -- data received from the pad (to ISERDES D)

 signal idelay_rdy    : std_logic;

begin

 sys_rst_in_not <= not sys_rst_in;

 sys_dcm_inst : sys_dcm
 port map(
   clkin_in         => sys_clk_in,
   rst_in           => sys_rst_in_not,
   clkin_ibufg_out  => open,
   clk0_out         => sys_clkdiv,     -- 100 MHz
   clk2x_out        => sys_clk,        -- 200 MHz
   clk2x180_out     => sys_clk180,     -- 200 MHz
   clk90_out        => sys_clkdiv90,   -- 100 MHz
   clk180_out       => sys_clkdiv180,  -- 100 MHz
   clk270_out       => sys_clkdiv270,  -- 100 MHz
   locked_out       => sys_lock );

 sys_lock_delay_SRL16 : SRL16
 generic map (
  INIT => X"0000")
 port map (
  Q   => sys_lock_delayed,
  A0  => '1', -- 16 clock delays
  A1  => '1',
  A2  => '1',
  A3  => '1',
  CLK => sys_clk,
  D   => sys_lock );

 sys_reset <= not( sys_lock and sys_lock_delayed );

 idelayctrl_inst : IDELAYCTRL
 port map (
   RDY    => idelay_rdy, -- 1-bit output indicates validity of the REFCLK
   REFCLK => sys_clk,    -- 1-bit reference clock input
   RST    => sys_reset   -- 1-bit reset input
 );

   iserdes_inst : ISERDES
   generic map (
      BITSLIP_ENABLE => FALSE,   -- TRUE FALSE
      DATA_RATE      => "DDR",   -- DDR SDR
      DATA_WIDTH     =>  4,      -- DDR 4,6,8,10  SDR 2,3,4,5,6,7,8
      INIT_Q1        => '0',
      INIT_Q2        => '0',
      INIT_Q3        => '0',
      INIT_Q4        => '0',
      INTERFACE_TYPE => "MEMORY",  -- model - "MEMORY" or "NETWORKING"
      IOBDELAY       => "IFD",    -- delay chain "NONE","IBUF","IFD","BOTH"
      IOBDELAY_TYPE  => "FIXED", -- tap delay "DEFAULT", "FIXED","VARIABLE"
      IOBDELAY_VALUE =>  1,        -- initial tap delay 0 to 63
      NUM_CE         =>  1,        -- clock enables 1,2
      SERDES_MODE    => "MASTER",  -- "MASTER" or "SLAVE"
      SRVAL_Q1       => '0',
      SRVAL_Q2       => '0',
      SRVAL_Q3       => '0',
      SRVAL_Q4       => '0')
   port map (
      O         => open,
      Q1        => q1,
      Q2        => q2,
      Q3        => q3,
      Q4        => q4,
      Q5        => q5,
      Q6        => q6,
      SHIFTOUT1 => open,
      SHIFTOUT2 => open,
      BITSLIP   => '0',
      CE1       => '1',
      CE2       => '1',
      CLK       => sys_clk,
      CLKDIV    => sys_clkdiv90,
      D         => iobuf_o,
      DLYCE     => '0',
      DLYINC    => '0',
      DLYRST    => '0',
      OCLK      => sys_clk,
      REV       => '0',
      SHIFTIN1  => '0',
      SHIFTIN2  => '0',
      SR        => sys_reset
 );

   oserdes_inst : OSERDES
   generic map (
      DATA_RATE_OQ => "DDR",    -- Specify data rate to "DDR" or "SDR"
      DATA_RATE_TQ => "DDR",    -- Specify data rate to "DDR", "SDR", or 
"BUF"
      DATA_WIDTH   => 4,        -- DDR 4,6,8,10 SDR 2,3,4,5,6,7, or 8
      INIT_OQ      => '0',      -- INIT for Q1 register
      INIT_TQ      => '0',      -- INIT for Q2 register
      SERDES_MODE  => "MASTER", -- Set SERDES mode to "MASTER" or "SLAVE"
      SRVAL_OQ     => '0',      -- Define Q1 output value upon SR assertion
      SRVAL_TQ     => '0',      -- Define Q2 output value upon SR assertion
      TRISTATE_WIDTH => 4 )     -- Specify parallel to serial converter width ??
   port map (
      OQ        => iobuf_i,         -- 1-bit output
      SHIFTOUT1 => open,            -- 1-bit output
      SHIFTOUT2 => open,            -- 1-bit output
      TQ        => iobuf_t,         -- 1-bit output
      CLK       => sys_clk,         -- 1-bit input
      CLKDIV    => sys_clkdiv,      -- 1-bit input
      D1        => data_in(0),      -- 1-bit input
      D2        => data_in(1),      -- 1-bit input
      D3        => data_in(2),      -- 1-bit input
      D4        => data_in(3),      -- 1-bit input
      D5        => '0',             -- 1-bit input
      D6        => '0',             -- 1-bit input
      OCE       => '1',             -- 1-bit input
      REV       => '0',             -- 1-bit input
      SHIFTIN1  => '0',             -- 1-bit input
      SHIFTIN2  => '0',             -- 1-bit input
      SR        => sys_reset,       -- 1-bit input
      T1        => '0',             -- 1-bit input
      T2        => '0',             -- 1-bit input
      T3        => '0',             -- 1-bit input
      T4        => '0',             -- 1-bit input
      TCE       => '1'              -- 1-bit input
   );

 iobuf_inst : IOBUF
 port map (
   I  =>  iobuf_i,  -- data going out of FPGA
   T  =>  iobuf_t,  -- data write enable
   O  =>  iobuf_o,  -- data coming into FPGA
   IO =>  dq        -- dq inout port
 );

end Behavioral;



Article: 96748
Subject: Re: Xilinx ISERDES Q1 issues
From: "Brad Smallridge" <bradsmallridge@dslextreme.com>
Date: Thu, 9 Feb 2006 16:21:32 -0800

-- The SIMULATION

--   ____  ____
--  /   /\/   /
-- /___/  \  /    Vendor: Xilinx
-- \   \   \/     Version : 7.1i
--  \   \         Application : ISE Foundation
--  /   /         Filename : waveform.vhw
-- /___/   /\     Timestamp : Thu Feb 09 15:12:03 2006
-- \   \  /  \
--  \___\/\___\
--

library IEEE;
use IEEE.STD_LOGIC_1164.ALL;
use IEEE.STD_LOGIC_ARITH.ALL;
use IEEE.STD_LOGIC_UNSIGNED.ALL;
library UNISIM;
use UNISIM.VComponents.all;
USE IEEE.STD_LOGIC_TEXTIO.ALL;
USE STD.TEXTIO.ALL;

ENTITY waveform IS
END waveform;

ARCHITECTURE testbench_arch OF waveform IS
    COMPONENT top
        PORT (
            sys_clk_in : In std_logic;
            sys_rst_in : In std_logic;
            data_in : In std_logic_vector (3 DownTo 0);
            dq : InOut std_logic;
            q1 : Out std_logic;
            q2 : Out std_logic;
            q3 : Out std_logic;
            q4 : Out std_logic;
            q5 : Out std_logic;
            q6 : Out std_logic
        );
    END COMPONENT;

    SIGNAL sys_clk_in : std_logic := '0';
    SIGNAL sys_rst_in : std_logic := '0';
    SIGNAL data_in : std_logic_vector (3 DownTo 0) := "0000";
    SIGNAL dq : std_logic := '0';
    SIGNAL q1 : std_logic := '0';
    SIGNAL q2 : std_logic := '0';
    SIGNAL q3 : std_logic := '0';
    SIGNAL q4 : std_logic := '0';
    SIGNAL q5 : std_logic := '0';
    SIGNAL q6 : std_logic := '0';

    SHARED VARIABLE TX_ERROR : INTEGER := 0;
    SHARED VARIABLE TX_OUT : LINE;

    constant PERIOD : time := 10 ns;
    constant DUTY_CYCLE : real := 0.5;
    constant OFFSET : time := 5 ns;

    BEGIN
        UUT : top
        PORT MAP (
            sys_clk_in => sys_clk_in,
            sys_rst_in => sys_rst_in,
            data_in => data_in,
            dq => dq,
            q1 => q1,
            q2 => q2,
            q3 => q3,
            q4 => q4,
            q5 => q5,
            q6 => q6
        );

        PROCESS    -- clock process for sys_clk_in
        BEGIN
            WAIT for OFFSET;
            CLOCK_LOOP : LOOP
                sys_clk_in <= '0';
                WAIT FOR (PERIOD - (PERIOD * DUTY_CYCLE));
                sys_clk_in <= '1';
                WAIT FOR (PERIOD * DUTY_CYCLE);
            END LOOP CLOCK_LOOP;
        END PROCESS;

        PROCESS
            -- The six generated CHECK_q1..CHECK_q6 procedures differed
            -- only in which output they compared, so they are collapsed
            -- into one checker here.  Reporting TX_LOC.all directly also
            -- avoids the original's fixed 4096-character message buffer.
            PROCEDURE CHECK_Q(
                actual_q : std_logic;
                next_q   : std_logic;
                q_name   : STRING;
                TX_TIME  : INTEGER
            ) IS
                VARIABLE TX_LOC : LINE;
                BEGIN
                IF (actual_q /= next_q) THEN
                    STD.TEXTIO.write(TX_LOC, string'("Error at time="));
                    STD.TEXTIO.write(TX_LOC, TX_TIME);
                    STD.TEXTIO.write(TX_LOC, string'("ns "));
                    STD.TEXTIO.write(TX_LOC, q_name);
                    STD.TEXTIO.write(TX_LOC, string'("="));
                    IEEE.STD_LOGIC_TEXTIO.write(TX_LOC, actual_q);
                    STD.TEXTIO.write(TX_LOC, string'(", Expected = "));
                    IEEE.STD_LOGIC_TEXTIO.write(TX_LOC, next_q);
                    ASSERT (FALSE) REPORT TX_LOC.all SEVERITY ERROR;
                    STD.TEXTIO.Deallocate(TX_LOC);
                    TX_ERROR := TX_ERROR + 1;
                END IF;
            END;
            BEGIN
                -- -------------  Current Time:  109ns
                WAIT FOR 109 ns;
                sys_rst_in <= '1';
                -- -------------------------------------
                -- -------------  Current Time:  499ns
                WAIT FOR 390 ns;
                data_in <= "1111";
                -- -------------------------------------
                -- -------------  Current Time:  509ns
                WAIT FOR 10 ns;
                data_in <= "0000";
                -- -------------------------------------
                -- -------------  Current Time:  539ns
                WAIT FOR 30 ns;
                data_in <= "0110";
                -- -------------------------------------
                -- -------------  Current Time:  549ns
                WAIT FOR 10 ns;
                data_in <= "0000";
                -- -------------------------------------
                -- -------------  Current Time:  579ns
                WAIT FOR 30 ns;
                data_in <= "1000";
                -- -------------------------------------
                -- -------------  Current Time:  589ns
                WAIT FOR 10 ns;
                data_in <= "0000";
                -- -------------------------------------
                -- -------------  Current Time:  619ns
                WAIT FOR 30 ns;
                data_in <= "0001";
                -- -------------------------------------
                -- -------------  Current Time:  629ns
                WAIT FOR 10 ns;
                data_in <= "0000";
                -- -------------------------------------
                WAIT FOR 381 ns;

                IF (TX_ERROR = 0) THEN
                    STD.TEXTIO.write(TX_OUT, string'("No errors or 
warnings"));
                    ASSERT (FALSE) REPORT
                      "Simulation successful (not a failure).  No problems 
detected."
                      SEVERITY FAILURE;
                ELSE
                    STD.TEXTIO.write(TX_OUT, TX_ERROR);
                    STD.TEXTIO.write(TX_OUT,
                        string'(" errors found in simulation"));
                    ASSERT (FALSE) REPORT "Errors found during simulation"
                         SEVERITY FAILURE;
                END IF;
            END PROCESS;

    END testbench_arch;
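
As posted, the stimulus process never calls the checker, so the run
always ends with "No errors or warnings".  Hooking CHECK_Q in takes
one call per sampled output inside that process, e.g. right after the
last data_in pulse -- the expected values in this fragment are
placeholders, not golden data (take them from a known-good run):

                -- Hypothetical sampling point; '1'/'0' below are
                -- placeholder expectations, not golden data.
                WAIT FOR 20 ns;                 -- 649 ns
                CHECK_Q(q1, '1', "q1", 649);
                CHECK_Q(q2, '0', "q2", 649);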




Article: 96749
Subject: Re: Parallel NCO (DDS) in Spartan3 for clock synthesis - highest possible speed?
From: "PeterC" <peter@geckoaudio.com>
Date: 9 Feb 2006 16:53:24 -0800
Links: << >>  << T >>  << A >>
Symon,

Yes, I need the MSB out of the FPGA, to drive an audio DAC. Its value
only really changes at 50 kHz or so, but to reduce the jitter associated
with this low-frequency transition, the clock that drives it out needs
to be as fast as possible (obviously). 840 Mbps would give 1.2 ns of
jitter, which would be more than good enough. The problem is that the
same NCO must generate an (approx) 12 MHz and 24 MHz signal - a few ns
jitter on these is unacceptable. I will look at the FDDRCPE in the IOBs
- great hint and much appreciated.
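
(For the record, the 1.2 ns figure is just one bit period at the
serializer rate: $t_{jitter} \approx 1 / 840\,\mathrm{Mb/s} \approx
1.19\,\mathrm{ns}$.)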

I'm considering introducing 4 bits of dither, using four 30-bit LFSRs
(linear feedback shift registers), which would give a nice and long (in
terms of repeat cycles) pseudo-random 4-bit word sequence, to spread
out my side-bands (I can live with the raised noise floor).
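
A minimal sketch of one such LFSR, assuming the XNOR (Fibonacci) form
with taps 30, 6, 4, 1 as tabulated in Xilinx XAPP052 -- verify the
taps before relying on the 2^30-1 sequence length.  Four of these,
differently seeded (or tapped at different bits), would supply the
4-bit dither word:

library IEEE;
use IEEE.STD_LOGIC_1164.ALL;

entity lfsr30 is
  port ( clk    : in  std_logic;
         dither : out std_logic );  -- one pseudo-random bit per clock
end lfsr30;

architecture rtl of lfsr30 is
  -- XNOR feedback makes all-zeros a legal state (all-ones is the
  -- lock-up state), so the power-on default seed is usable.
  signal sr : std_logic_vector(30 downto 1) := (others => '0');
begin
  process (clk)
  begin
    if rising_edge(clk) then
      -- shift up one place; feed back the XNOR of taps 30, 6, 4, 1
      sr <= sr(29 downto 1) &
            (sr(30) xnor sr(6) xnor sr(4) xnor sr(1));
    end if;
  end process;
  dither <= sr(30);
end rtl;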

Cheers,
Peter C.



