Messages from 133275

Article: 133275
Subject: Xilinx SecureIP simulation and third-party simulators?
From: "SynopsysFPGAexpress" <fpgas@sss.com>
Date: Mon, 23 Jun 2008 06:46:16 -0700
Starting with ISE 10.1, Xilinx has begun migrating some hard-IP simulation models
from SmartModel to "SecureIP."  For now, the SecureIP blocks can only
be simulated in one simulator: ModelSim 6.3c (or later).

" AR #30975 - 10.1 SecureIP libraries - Does NCSIM and VCS support Secure IP 
flow?"
http://www.xilinx.com/support/answers/30975.htm
Answer: SecureIP in NCSIM and VCS will be supported starting in ISE 11.1

Ok, so I guess the question is, when is ISE 11.1 planned for release?
Also, will the Aldec simulators (Active-HDL, Riviera) be supported, too? 



Article: 133276
Subject: Re: which commercial HDL-Simulator for FPGA?
From: "SynopsysFPGAexpress" <fpgas@sss.com>
Date: Mon, 23 Jun 2008 06:50:27 -0700
"Petter Gustad" <newsmailcomp6@gustad.com> wrote in message 
news:87bq1snxfw.fsf@pangea.home.gustad.com...
> Kim Enkovaara <kim.enkovaara@iki.fi> writes:
>
>> Joseph H Allen wrote:
>>
>>> So in either of these, you typically simulate and have all signals 
>>> dumped to
>>> a huge output file (using $dumpvars(0); $dumpon; for vcs or 
>>> $shm_open(...);
>>> $shm_probe(..); for ncsim).  Then you can explore the design hierarchy 
>>> and
>>> choose which signals to view in vcs -RPP or simvision.  The same is true 
>>> for
>>> even icarus verilog with gtkwave.
>>
>> In my mind this might work for small designs, but the huge amount of
>> signal logging slows down the simulation. I usually like to log just
>
> I've used this methods for many years for large ASIC designs. It slows
> down the simulation, but I find this much more effective than running
> the simulation again. Also it's more cost effective to release the
> expensive simulation license and use the cheaper waveform viewer for
> debugging. You can even run the simulations during the night and have
> the VPD (TRN, SST or whatever you prefer) files waiting for you the
> next morning.

For e/Specman and SystemVerilog testbench debugging, Cadence NCsim
doesn't log dynamic objects to the TRN/SST file.  So you pretty much
have to do most debugging interactively (if you want to see SystemVerilog
objects/queues/dynamic arrays, etc.), with the full license checkout of
the simulator.

I'm not sure how that compares to Mentor Questasim or Synopsys VCS. 



Article: 133277
Subject: Xilinx and RAM/ROM monitoring
From: XSterna <XSterna@gmail.com>
Date: Mon, 23 Jun 2008 07:12:50 -0700 (PDT)
Hi,

I was wondering if a tool exists to monitor the content of a RAM or ROM
connected to a Xilinx FPGA.

I would like to be able to control the content of those memories, like
a debugging tool for a microcontroller, for example.

Does anybody know if this type of "debug" option is available for FPGA
development?

Xavier

Article: 133278
Subject: Re: virtex-5: can't use DCM (too low input frequency)
From: John_H <newsgroup@johnhandwork.com>
Date: Mon, 23 Jun 2008 08:40:21 -0700 (PDT)
techG wrote:
>
> Unfortunately, I can't use 2xClockOutput in DFS Low Frequency Mode :(
> Thank you all for the help, I'll try both ways:
> 1) using a DCM and dividing the 4x output by two (can I use another
> DCM for this purpose?)
> 2) using IDDR flip-flops
>
> Giulio

If you can't use the CLKx2 because the output frequency is too low for
the DCM, you wouldn't be able to use a CLKx4 output to drive another
DCM in divide-by-2 mode because - surprise - the output frequency of
the 2nd DCM would be too slow.

I second the use of clock enables rather than DFF based divider and
the XOR phase control method:

If you toggle one flop with 6MHz and reregister that flop at the 24MHz
clock, the XOR of those two registers will always be asserted the
clock phase after the 6MHz edge.  Either use that as a "data is valid"
clock enable for a downstream-only system or use that signal to reload
a 2-bit counter and decode the timeslot *before* the 6MHz edge for the
clock enable if you need the I/O to be aligned to the 6MHz clock.

- John_H
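
The scheme John_H describes can be sanity-checked with a small cycle-accurate model (a sketch in Python rather than HDL; the signal names, and the assumption that the 6MHz edge coincides with every fourth 24MHz edge, are mine):

```python
# Model of the toggle/reregister/XOR clock-enable scheme:
# flop 't' toggles on each 6 MHz edge; flop 'r' re-registers 't'
# in the 24 MHz domain; 'ce = t ^ r' is then high for exactly one
# 24 MHz period, right after each 6 MHz edge (24/6 = 4 cycles).

def simulate(cycles):
    t = 0    # flop toggled by the 6 MHz clock
    r = 0    # 24 MHz re-register of t
    ce = []
    for n in range(cycles):
        r_next = t            # 24 MHz edge samples the old t
        if n % 4 == 0:        # coincident 6 MHz edge: t toggles
            t ^= 1
        r = r_next
        ce.append(t ^ r)      # clock enable seen during this phase
    return ce

ce = simulate(16)             # ce asserts once per 6 MHz period
```

Feeding ce to downstream 24MHz flops then gives them an effective 6MHz update rate without a second clock domain.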

Article: 133279
Subject: Re: FPGA based database searching
From: Mike Treseler <mike_treseler@comcast.net>
Date: Mon, 23 Jun 2008 11:04:49 -0700
Norman Bollmann wrote:

> I've got a software implementation in 
> ANSI-C for a complex database searching. The database is a proprietary 
> format where I am saving data, which has to be given as a result, depending 
> on the input data. Problem is, the software implementation is far too slow.

Get a faster server, load Linux.
FPGAs are OK for data filtering or statistics.
If you need "complex database searching" you
need a computer.

       -- Mike Treseler

Article: 133280
Subject: Linked Group for FPGAs & CPLDs
From: "vikashrungta@gmail.com" <vikashrungta@gmail.com>
Date: Mon, 23 Jun 2008 11:46:22 -0700 (PDT)
Hi,

There is a new FPGA LinkedIn Group.

Joining will allow you to find and contact other FPGA and CPLD members on
LinkedIn. The goal of this group is to help members:

-> Reach other members of the FPGA & CPLD community
-> Accelerate careers/business through referrals from FPGA Group
members
-> Know more than a name – view rich professional profiles from fellow
FPGA Group members
Here's the link to join:

http://www.linkedin.com/e/gis/54049/5B3F2217B20B

Hope to see you in the group,

-- Vikram


Article: 133281
Subject: Re: Image Sensor Interface.
From: glen herrmannsfeldt <gah@ugcs.caltech.edu>
Date: Mon, 23 Jun 2008 10:59:05 -0800
MikeWhy wrote:
(snip)

> Nyquist relates to sinusoids and periodicity in the signal. The sampling 
> period as it relates to Nyquist with your image sensor is the frame 
> rate, not the pixel clock/ADC sample rate. The two are not related in a 
> meaningful way. Fuhget about it.

Yes, Nyquist is completely unrelated to the signal coming out
of an image sensor, but it is important in what goes in.

Specifically, the image sensor samples an analog (image) in
two dimensions, and, for the result to be correct the image itself
must not have spatial frequencies at the sensor surface higher
than half the pixel spacing.  Sometimes one trusts the lens to
do that, others an optical low pass filter is used.

-- glen


Article: 133282
Subject: Re: Image Sensor Interface.
From: ertw <gill81@hotmail.com>
Date: Mon, 23 Jun 2008 13:30:16 -0700 (PDT)
On Jun 23, 2:59 pm, glen herrmannsfeldt <g...@ugcs.caltech.edu> wrote:
> MikeWhy wrote:
>
> (snip)
>
> > Nyquist relates to sinusoids and periodicity in the signal. The sampling
> > period as it relates to Nyquist with your image sensor is the frame
> > rate, not the pixel clock/ADC sample rate. The two are not related in a
> > meaningful way. Fuhget about it.
>
> Yes, Nyquist is completely unrelated to the signal coming out
> of an image sensor, but it is important in what goes in.
>
> Specifically, the image sensor samples an analog (image) in
> two dimensions, and, for the result to be correct the image itself
> must not have spatial frequencies at the sensor surface higher
> than half the pixel spacing.  Sometimes one trusts the lens to
> do that, others an optical low pass filter is used.
>
> -- glen

Guys, thanks a lot for the help. Jonathan, your explanation was
great...

Answers to the questions you asked -

- It's a monochrome sensor
- I do get explicit frame and line signals from the sensor
- The sensor does not have any clock-generating circuitry (I have to
provide the clock, or pixel clock, to the sensor; not sure if I was
clear about that in the previous post).

I have a few more questions regarding data storage and processing (I
think the readout from the sensor is a little clearer in my head now).

The sensor is a packaged integrated circuit with processing applied to
the final-stage analog signal (that's where I am planning to read it
using an ADC).

The output is actually 4 differential signals (one for each column),
meaning I will need four ADCs (all four video output signals come out
simultaneously). The resolution that I want is 16 bits.

Now, that means I have four parallel channels of 16 bits coming into
the FPGA every 25 ns that I need to store somewhere. The total data
per frame is:
(320 x 256) x 16 bits = 1310720 bits/frame, OR 163840 bytes/frame, or
160 KBytes/frame.

Do you think I can store that much within a Xilinx FPGA? I am trying
to do 30 frames per second, which means I have roughly 33 ms per frame,
but using a 40 MHz clock each frame can be read out in 512 microseconds
with a whole lot of dead time after each frame (unless I can run the
sensor at a slower pixel clock).

The idea is to transfer data over the PCI bus to the computer, and I
can't go over 133 Meg transfers per second. Since I am reading 4
channels @ 40 MHz, that works out to be 160 Msamples per second, so it is
not possible to transfer the data on the fly over the bus (unless I am
misunderstanding something). Is there a way to transfer data on the
fly over the PCI bus other than slowing the pixel clock?

Or how can I efficiently transfer the data over the bus (even if
I have to store it and then use a slower clock to transfer the data out)?
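
The arithmetic above can be checked directly, and separating the burst rate from the average rate speaks to the PCI question (a quick sketch; the 18 Kbit block-RAM size and the treatment of "133 Meg transfers" as roughly 133 MB/s for 32-bit/33 MHz PCI are my assumptions, not from the post):

```python
import math

# Sanity-check the frame-size and bandwidth numbers in the post.
PIXELS   = 320 * 256      # pixels per frame
BITS     = 16             # ADC resolution
CHANNELS = 4              # parallel column outputs
PIX_CLK  = 40e6           # pixel clock per channel, Hz
FPS      = 30

frame_bits  = PIXELS * BITS            # 1,310,720 bits
frame_bytes = frame_bits // 8          # 163,840 bytes = 160 KB

# Readout time: pixels arrive 4 at a time at 40 MHz.
readout_s = (PIXELS / CHANNELS) / PIX_CLK      # 512 us

# Burst rate during readout vs. average rate over a frame:
burst_Bps = CHANNELS * (BITS // 8) * PIX_CLK   # 320 MB/s: too fast for PCI
avg_Bps   = frame_bytes * FPS                  # ~4.8 MB/s: trivial for PCI

# Buffering one full frame in (assumed) 18 Kbit block RAMs:
brams = math.ceil(frame_bits / 18432)          # 72 of them
```

So a frame buffer (on-chip if the part has enough block RAM, otherwise external) plus a slower drain onto the bus is all that's needed; the average rate is far below the PCI limit.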

Article: 133283
Subject: Re: Image Sensor Interface.
From: ertw <gill81@hotmail.com>
Date: Mon, 23 Jun 2008 13:34:56 -0700 (PDT)
Guys, thanks a lot for the help. Jonathan, your explanation was
great...

Answers to the questions you asked -

- It's a monochrome sensor
- I do get explicit frame and line signals from the sensor
- The sensor does not have any clock-generating circuitry (I have to
provide the clock, or pixel clock, to the sensor; not sure if I was
clear about that in the previous post).

I have a few more questions regarding data storage and processing (I
think the readout from the sensor is a little clearer in my head now).

The sensor is a packaged integrated circuit with processing applied to
the final-stage analog signal (that's where I am planning to read it
using an ADC).

The output is actually 4 differential signals (one for each column),
meaning I will need four ADCs (all four video output signals come out
simultaneously). The resolution that I want is 16 bits.

Now, that means I have four parallel channels of 16 bits coming into
the FPGA every 25 ns that I need to store somewhere. The total data
per frame is:
(320 x 256) x 16 bits = 1310720 bits/frame, OR 163840 bytes/frame, or
160 KBytes/frame.

Do you think I can store that much within a Xilinx FPGA? I am trying
to do 30 frames per second, which means I have roughly 33 ms per frame,
but using a 40 MHz clock each frame can be read out in 512 microseconds
with a whole lot of dead time after each frame (unless I can run the
sensor at a slower pixel clock).

The idea is to transfer data over the PCI bus to the computer, and I
can't go over 133 Meg transfers per second. Since I am reading 4
channels @ 40 MHz, that works out to be 160 Msamples per second, so it is
not possible to transfer the data on the fly over the bus (unless I am
misunderstanding something). Is there a way to transfer data on the
fly over the PCI bus other than slowing the pixel clock?

Or how can I efficiently transfer the data over the bus (even if
I have to store it and then use a slower clock to transfer the data out)?

Article: 133284
Subject: Re: FPGA based database searching
From: Ben Jackson <ben@ben.com>
Date: Mon, 23 Jun 2008 20:40:40 GMT
On 2008-06-23, Norman Bollmann <wirdnichtgelesen@gmx.net> wrote:
> Target is a database searching of 
> 262144 elements with 16 bit each in maximum 220 ms.

There are only 65536 possible values of your 16 bit value.  What is the
output of your algorithm?  Found y/n?  Then use a lookup table!  I doubt
an FPGA is the right answer to your problem.  On a modern CPU you've got
thousands of cycles for each element -- assuming you even have to consider
all of them for any given key.

-- 
Ben Jackson AD7GD
<ben@ben.com>
http://www.ben.com/
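
Ben's lookup-table suggestion is easy to sketch: with only 65536 possible 16-bit keys, a present/absent query needs no searching at all (a toy illustration in Python; the database contents here are made up):

```python
# Membership lookup table for 16-bit keys: one flag per possible
# value (a bytearray here; a single 64 Kbit RAM would do in hardware).

def build_table(elements):
    table = bytearray(1 << 16)
    for e in elements:
        table[e & 0xFFFF] = 1
    return table

db    = [7, 42, 1000, 65535]   # stand-in for the 262144 elements
table = build_table(db)

present = table[42]            # 1: O(1), no scan of the 262144 entries
absent  = table[43]            # 0
```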

Article: 133285
Subject: Re: Image Sensor Interface.
From: ertw <gill81@hotmail.com>
Date: Mon, 23 Jun 2008 13:44:45 -0700 (PDT)
On Jun 22, 10:43 am, Jonathan Bromley <jonathan.brom...@MYCOMPANY.com>
wrote:
> On Sun, 22 Jun 2008 07:01:10 -0700 (PDT), ertw <gil...@hotmail.com>
> wrote:
>
> >Hi, I am planning to read an image sensor using an FPGA but I am a
> >little confused about a bunch of things. Hopefully someone here can
> >help me understand the following things:
>
> >Note: The image sensor output is an ANALOG signal. Datasheet says that
> >the READOUT clock is 40MHz.
>
> It somewhat depends on whereabouts in the sensor's output
> signal processing chain you expect to pick up the signal.
> Is this a raw sensor chip that you have?  Is it hiding
> behind a sensor drive/control chipset?  Is it already
> packaged, supplying standard composite video output?
>
>
>
> >1. How is reading of an image sensor using an ADC different than
> >reading a random analog signal using an ADC?
>
> You're right to question this.  Of course, at base it isn't -
> it's just a matter of sampling an analog signal.  But the image
> sensor has some slightly strange properties.  First off, the
> analog signal has already been through some kind of sample-
> and-hold step.  In an idealised world, with a 40 MHz readout
> clock, you would expect to see the analog signal "flat" for
> 25ns while it delivers the sampled signal for one pixel,
> and then make a step change to a different voltage for the
> next pixel which again would last for 25ns, and so on.
>
> In the real world, of course, it ain't that simple.  First,
> you have the limited bandwidth of the analog signal processing
> chain (inside the image sensor and its support chips) which will
> cause this idealised stair-step waveform to have all manner of
> non-ideal characteristics.  Indeed, if the output signal is
> designed for use as an analog composite video signal, then
> it will probably have been through a low-pass filter to remove
> most of the staircase-like behaviour.  Second, even before
> the analog signal made it as far as the staircase waveform
> I described, there will be a lot of business about sampling
> and resetting the image sensor's output structures.
>
> In summary, all of this stuff says that you should take
> care to sample the analog signal exactly when the camera
> manufacturer tells you to sample it, with the 40 MHz sample
> clock that they've so thoughtfully provided (I hope!).
>
> >      And the amount of data or memory required can be calculated
> >using:
> >      Sampling rate x ADC resolution
>
> >    - This is different in case of an image sensor
>
> Of course it is not different.  If you get 16 bits, 40M times
> per second, then you have 640Mbit/sec to handle.
>
> > Do I use an ADC running at 40 MSamples/second since the
> > pixel output is 40 MHz?
>
> If the camera manufacturer gives you a "sampled analog"
> output and a sampling clock, then yes.  On the other hand,
> if all you have is a composite analog video output with
> no sampling clock, you are entirely free to choose your
> sampling rate - bearing in mind that it may not match
> up with pixels on the camera, and therefore you are
> trusting the camera's low-pass filter to do a good job
> of the interpolation for you.
>
> >      How do I calculate the required memory?
>
> >      Is it simply 40 MS/s x 16 bits (adc resolution) for each pixel
>
> eh?
>
> >or just 16 bits per pixel?
>
> Only the very highest quality cameras give an output that's worth
> digitising to 16 bit precision.  10 bits should be enough for
> anyone; 8 bits is often adequate for low-spec applications such
> as webcams and surveillance.
>
> >      If each frame is 320 x 256 then data per frame is - (320x256) x
> >16 bits, why not multiply this by 40 MS/s like
> >      you would for any other random analog signal?
>
> I have no idea what you mean.  40 MHz is the *pixel* rate.  Let's
> follow that through:
>
>   40 MHz, 320 pixels on a line - that's 8 microseconds per line.
>   But don't forget to add the extra 2us or thereabouts that will
>   be needed for horizontal synch or whatever.  Let's guess 10us
>   per line.
>
>   256 lines per image, 10us per line, that's 2.56 milliseconds per
>   image - but, again, we need to add a margin for frame synch.
>   Perhaps 3ms per image.
>
>   Wow, you're getting 330 images per second - that's way fast.
>
> But whatever you do, if you sample your ADC at 40 MHz then you
> get 40 million samples per second!
>
> ~~~~~~~~~~~~~~~~~~~~~~~
>
> More questions:
>
> What about colour?  Or is this a monochrome sensor?
>
> Do you get explicit frame and line synch signals from the
> camera, or must you extract them from the composite
> video signal?
>
> Must you create the camera's internal line, pixel and field
> clocks yourself in the FPGA, or does the camera already have
> clock generators in its support circuitry?
>
> ~~~~~~~~~~~~~~~~~~~~~~
>
> You youngsters have it so easy :-)  The first CCD camera
> controller I did had about 60 MSI chips in it, an unholy
> mess of PALs, TTL, CMOS, special-purpose level shifters
> for the camera clocks (TSC426, anyone?), sample-and-hold
> and analog switch devices to capture the camera output,
> some wild high-speed video amplifiers (LM533)...  And
> the imaging device itself, from Fairchild IIRC, was only
> NTSC-video resolution and cost around $300.  Things have
> moved on a little in the last quarter-century...
> --
> Jonathan Bromley, Consultant
>
> DOULOS - Developing Design Know-how
> VHDL * Verilog * SystemC * e * Perl * Tcl/Tk * Project Services
>
> Doulos Ltd., 22 Market Place, Ringwood, BH24 1AW, UK
> jonathan.brom...@MYCOMPANY.com  http://www.MYCOMPANY.com
>
> The contents of this message may contain personal views which
> are not the views of Doulos Ltd., unless specifically stated.

Guys, thanks a lot for the help. Jonathan, your explanation was
great...

Answers to the questions you asked -

- It's a monochrome sensor
- I do get explicit frame and line signals from the sensor
- The sensor does not have any clock-generating circuitry (I have to
provide the clock, or pixel clock, to the sensor; not sure if I was
clear about that in the previous post).

I have a few more questions regarding data storage and processing (I
think the readout from the sensor is a little clearer in my head now).

The sensor is a packaged integrated circuit with processing applied to
the final-stage analog signal (that's where I am planning to read it
using an ADC).

The output is actually 4 differential signals (one for each column),
meaning I will need four ADCs (all four video output signals come out
simultaneously). The resolution that I want is 16 bits.

Now, that means I have four parallel channels of 16 bits coming into
the FPGA every 25 ns that I need to store somewhere. The total data
per frame is:
(320 x 256) x 16 bits = 1310720 bits/frame, OR 163840 bytes/frame, or
160 KBytes/frame.

Do you think I can store that much within a Xilinx FPGA? I am trying
to do 30 frames per second, which means I have roughly 33 ms per frame,
but using a 40 MHz clock each frame can be read out in 512 microseconds
with a whole lot of dead time after each frame (unless I can run the
sensor at a slower pixel clock).

The idea is to transfer data over the PCI bus to the computer, and I
can't go over 133 Meg transfers per second. Since I am reading 4
channels @ 40 MHz, that works out to be 160 Msamples per second, so it is
not possible to transfer the data on the fly over the bus (unless I am
misunderstanding something). Is there a way to transfer data on the
fly over the PCI bus other than slowing the pixel clock?

Or how can I efficiently transfer the data over the bus (even if
I have to store it and then use a slower clock to transfer the data out)?
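
Jonathan's timing estimates in the quoted post can be re-derived directly (note he assumes one pixel per 40 MHz clock; with the four parallel outputs described above, the raw readout is four times faster):

```python
# Re-derive the line/frame timing from the quoted post.
PIX_CLK = 40e6
LINE_PX = 320
LINES   = 256

active_line_s = LINE_PX / PIX_CLK       # 8 us of pixels per line
line_s        = active_line_s + 2e-6    # + ~2 us horizontal-synch guess
frame_s       = LINES * line_s          # 2.56 ms of lines
frame_s_tot   = 3e-3                    # + frame-synch margin: ~3 ms
fps           = 1 / frame_s_tot         # ~330 images per second
```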

Article: 133286
Subject: Re: Xilinx SecureIP simulation and third-party simulators?
From: "HT-Lab" <hans64@ht-lab.com>
Date: Mon, 23 Jun 2008 22:35:07 +0100

"SynopsysFPGAexpress" <fpgas@sss.com> wrote in message 
news:JON7k.9710$jI5.7637@flpi148.ffdc.sbc.com...
> Starting with ISE 10.1, Xilinx has begun migrating some hard-IP simulation
> models from SmartModel to "SecureIP."  For now, the SecureIP blocks can
> only be simulated in one simulator: ModelSim 6.3c (or later)
>
> " AR #30975 - 10.1 SecureIP libraries - Does NCSIM and VCS support Secure 
> IP flow?"
> http://www.xilinx.com/support/answers/30975.htm
> Answer: SecureIP in NCSIM and VCS will be supported starting in ISE 11.1
>
> Ok, so I guess the question is, when is ISE 11.1 planned for release?
> Also, will the Aldec simulators (Active-HDL, Riviera) be supported, too?

It also seems to require a Verilog license(?):

http://www.xilinx.com/support/answers/30481.htm

But then this answer seems to indicate you can use VHDL (look at the
solution), or at least after 10.1 SP2:

http://www.xilinx.com/support/answers/31125.htm

If Verilog is required, then I hope that Xilinx is kind enough to support
both SmartModels and SecureIP until VHDL 4.0/2008 is supported (it supports
encryption in the same way as Verilog).

Hans
www.ht-lab.com



Article: 133287
Subject: XAUI - INTERNAL LOOPBACK SETUP - DRP (DYNAMIC RECONFIGURATION PORT)
From: explore <chethanzmail@gmail.com>
Date: Mon, 23 Jun 2008 14:37:08 -0700 (PDT)
I am trying to enable an internal loopback in a XAUI core (near-end
PMA loopback) and would like your suggestions on that. I read about
the loopback modes in UG196 and found that the DRP registers need to be
modified in order to enable the internal loopback. The chapter on
loopback mentions that DRP address 26[6:3] needs to be set to
'1111'. I tried to set the daddr port to this value, but the internal
loopback does not occur. Also, there is an MDIO interface in the core,
and I have read that the loopback mode can be enabled through this.
Could you please throw some light on how I can go about setting up the
internal loopback?

Thanks in advance,
Chethan

Article: 133288
Subject: Re: FPGA based database searching
From: Andy <jonesandy@comcast.net>
Date: Mon, 23 Jun 2008 14:57:15 -0700 (PDT)
On Jun 23, 1:04 pm, Mike Treseler <mike_trese...@comcast.net> wrote:
> Norman Bollmann wrote:
> > I've got a software implementation in
> > ANSI-C for a complex database searching. The database is a proprietary
> > format where I am saving data, which has to be given as a result, depending
> > on the input data. Problem is, the software implementation is far too slow.
>
> Get a faster server, load linux.
> FPGA's are OK for data filtering or statistics.
> If you need "complex database searching" you
> need a computer.
>
>        -- Mike Treseler

The bottleneck for most searches has to do with how many compares can
be done per unit time, which usually boils down to how fast data can
be brought into the search. Unless you have a significantly faster
mechanism to access the data than the CPU, you probably won't be able
to search it any faster. That's assuming you have the undivided
attention of the CPU. If the CPU is busy doing other things while your
FPGA could be doing the searching, that might improve overall
performance.

Block ram usually won't help much, since you can only access one or
two addresses at a time. If you had a multi-word key, then pulling the
data into a multi-way structure (flops or replicated block rams) such
that the different parts of the key could be compared to multiple data
words in parallel, then you'd be getting somewhere.

Andy
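
Andy's point about parallelism can be caricatured in software (Python standing in for hardware here; a real design would replicate block RAMs or spread the data across flops):

```python
# A single RAM port yields one compare per cycle, so a linear scan of
# N words costs ~N cycles; replicating the data across `banks` RAMs
# lets `banks` words be compared each cycle.

def banked_search(data, key, banks):
    """Return (index, cycles) scanning `banks` words per 'cycle'."""
    cycles = 0
    for base in range(0, len(data), banks):
        cycles += 1
        chunk = data[base:base + banks]
        if key in chunk:
            return base + chunk.index(key), cycles
    return None, cycles

data = list(range(1024))                  # stand-in database
i1, c1 = banked_search(data, 1000, 1)     # one compare per cycle
i4, c4 = banked_search(data, 1000, 4)     # four replicated banks
# same hit, roughly 4x fewer cycles
```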

Article: 133289
Subject: Re: XAUI - INTERNAL LOOPBACK SETUP - DRP (DYNAMIC RECONFIGURATION
From: austin <austin@xilinx.com>
Date: Mon, 23 Jun 2008 15:08:27 -0700
Chethan,

What device family? V2P, V4, or V5?

Also, there is a serial loopback and a parallel loopback.  And, if that
is not enough, is it the near end or the far end which is looping back?
Generally, a parallel or digital loopback checks the function and the
logic (step one of any testing).  Then a near end serial loopback will
check the analog side.  Often the serial loopback must have a good
transmit termination, as reflections may cause errors.  In this sense,
looping back with nothing connected may fail due to reflections (get out
the scope).  Following that, looping back the far end on its parallel
side will successfully check the analog and digital of the near end, and
the analog looped back at the far end.  Then the only thing not checked
is synchronization (far end clock vs near end clock).  For testing the
clocking, you need a mode at the far end to use either the local clock,
or to re-use the received clock.  If you use the far end local clock,
then you also need a loopback at the far end after the receive FIFO,
before the transmit FIFO, so that all of the design is tested.

Loopback testing can be a major task:

http://www.juniper.net/techpubs/software/junos/junos76/swconfig76-network-interfaces/html/interfaces-physical-config28.html

http://www.credence.com/technical-library/open-docs/test-trends_loopback.pdf

Austin

Article: 133290
Subject: Re: Image Sensor Interface.
From: "MikeWhy" <boat042-nospam@yahoo.com>
Date: Mon, 23 Jun 2008 18:02:36 -0500
"glen herrmannsfeldt" <gah@ugcs.caltech.edu> wrote in message 
news:xcadnWI25-LcecLVnZ2dnUVZ_tTinZ2d@comcast.com...
> MikeWhy wrote:
> (snip)
>
>> Nyquist relates to sinusoids and periodicity in the signal. The sampling 
>> period as it relates to Nyquist with your image sensor is the frame rate, 
>> not the pixel clock/ADC sample rate. The two are not related in a 
>> meaningful way. Fuhget about it.
>
> Yes, Nyquist is completely unrelated to the signal coming out
> of an image sensor, but it is important in what goes in.
>
> Specifically, the image sensor samples an analog (image) in
> two dimensions, and, for the result to be correct the image itself
> must not have spatial frequencies at the sensor surface higher
> than half the pixel spacing.  Sometimes one trusts the lens to
> do that, others an optical low pass filter is used.

Which do you mean? Two pixels is Nyquist critical. Half pixel aliasing is a 
spatial resolution problem, not a spectral aliasing (Nyquist) issue.



Article: 133291
Subject: Re: Image Sensor Interface.
From: "MikeWhy" <boat042-nospam@yahoo.com>
Date: Mon, 23 Jun 2008 19:20:08 -0500
"ertw" <gill81@hotmail.com> wrote in message 
news:812c4ac9-d1cf-4a1d-a66b-807aeb0c7359@m45g2000hsb.googlegroups.com...
Now, that means I have four parallel channels of 16 bits coming into
the FPGA every 25 ns that I need to store somewhere. The total data
per frame is:
(320 x 256) x 16 bits = 1310720 bits/frame OR 163840 Bytes/frame or
160 KBytes / frame.

Do you think I can store that much within a Xilinx FPGA? I am trying
to do 30 frames per second, which means I have roughly 33 ms per frame,
but using a 40 MHz clock each frame can be read out in 512 microseconds
with a whole lot of dead time after each frame (unless I can run the
sensor at a slower pixel clock).

=========
A block RAM FIFO comes to mind. Maybe even 4 of them, one for each column 
stream. Search the docs for BRAM.

The frames are small enough, and 33 ms is long enough, that you likely won't
need to double buffer - for example, by staging the data in larger, slower
memory to allow for bus contention.



Article: 133292
Subject: Re: virtex-5: can't use DCM (too low input frequency)
From: John_H <newsgroup@johnhandwork.com>
Date: Mon, 23 Jun 2008 19:59:48 -0700
John_H wrote:
<snip>
> 
> I second the use of clock enables rather than DFF based divider and
> the XOR phase control method:

Ahem - sorry, frog in my throat...

I second the use of clock enables rather than DFF based divider and I 
also second the XOR phase control method:

I didn't mean to add any confusion  :-)

> If you toggle one flop with 6MHz and reregister that flop at the 24MHz
> clock, the XOR of those two registers will always be asserted the
> clock phase after the 6MHz edge.  Either use that as a "data is valid"
> clock enable for a downstream-only system or use that signal to reload
> a 2-bit counter and decode the timeslot *before* the 6MHz edge for the
> clock enable if you need the I/O to be aligned to the 6MHz clock.
> 
> - John_H

Article: 133293
Subject: Re: virtex-5: can't use DCM (too low input frequency)
From: Peter Alfke <alfke@sbcglobal.net>
Date: Mon, 23 Jun 2008 20:52:47 -0700 (PDT)
On Jun 23, 7:59 pm, John_H <newsgr...@johnhandwork.com> wrote:
> John_H wrote:
>
> <snip>
>
>
>
> > I second the use of clock enables rather than DFF based divider and
> > the XOR phase control method:
>
> Ahem - sorry, frog in my throat...
>
> I second the use of clock enables rather than DFF based divider and I
> also second the XOR phase control method:
>
> I didn't mean to add any confusion  :-)
>
> > If you toggle one flop with 6MHz and reregister that flop at the 24MHz
> > clock, the XOR of those two registers will always be asserted the
> > clock phase after the 6MHz edge.  Either use that as a "data is valid"
> > clock enable for a downstream-only system or use that signal to reload
> > a 2-bit counter and decode the timeslot *before* the 6MHz edge for the
> > clock enable if you need the I/O to be aligned to the 6MHz clock.
>
> > - John_H

This whole subject looks like a good basis for a creative appnote.
Peter Alfke

Article: 133294
Subject: How to include the Xilnet library in an EDK project?
From: vikram <vikram788@gmail.com>
Date: Mon, 23 Jun 2008 23:25:09 -0700 (PDT)
Hello,

I use EDK 9.1i, and am trying to use OPB Ethernet on the XUP Virtex 2
Pro board, with the PPC 405.

In order to implement TCP/IP communication between the board and a PC
(Windows XP), I wanted to use xilnet (I found lwip hard to
understand, being new to all this, and also found an example using
xilnet... link: http://www.eece.unm.edu/xup/ml300ppc405_lab3.htm).

However, I found that the xilnet library is not available as an option
in the software platform settings menu
(lwip, on the other hand, is available).

Further, when I tried to manually include it in the .mss file and
generate libraries (libgen), it was removed automatically, and no
files (include or libsrc in the ppc405 folder) were generated. How
do I use this library?

Please reply soon...

Thanks,

Vikram

Article: 133295
Subject: Re: Image Sensor Interface.
From: "MikeWhy" <boat042-nospam@yahoo.com>
Date: Tue, 24 Jun 2008 02:11:47 -0500
Links: << >>  << T >>  << A >>
"glen herrmannsfeldt" <gah@ugcs.caltech.edu> wrote in message 
news:9Nednd9t4eMPDP3VnZ2dnUVZ_jmdnZ2d@comcast.com...
> MikeWhy wrote:
> (snip regarding Nyquist and image sensors)
>
>> Which do you mean? Two pixels is Nyquist critical. Half pixel aliasing is 
>> a spatial resolution problem, not a spectral aliasing (Nyquist) issue.
>
> It isn't usually as bad as audio, but an image with a very
> high spatial frequency can alias on an image sensor.
> (Usually called Moire for images.  Aliasing can also cause
> color effects based on the pattern of the color filters
> on the sensor.)
>
> http://www.nikonians.org/nikon/d200/nikon_d200_review_2.html#aa95cf09

Sure, I've clicked the shutter a few times. I was even around when Sigma 
splatted in the market with the Foveon sensor. All the same, Bayer aliasing 
isn't related to Nyquist aliasing and sampling frequency. The OP needn't 
concern himself with Nyquist considerations. Yes?



Article: 133296
Subject: Re: Image Sensor Interface.
From: glen herrmannsfeldt <gah@ugcs.caltech.edu>
Date: Mon, 23 Jun 2008 23:18:33 -0800
Links: << >>  << T >>  << A >>
MikeWhy wrote:
(snip regarding Nyquist and image sensors)

> Which do you mean? Two pixels is Nyquist critical. Half pixel aliasing 
> is a spatial resolution problem, not a spectral aliasing (Nyquist) issue.

It isn't usually as bad as audio, but an image with a very
high spatial frequency can alias on an image sensor.
(Usually called Moire for images.  Aliasing can also cause
color effects based on the pattern of the color filters
on the sensor.)

http://www.nikonians.org/nikon/d200/nikon_d200_review_2.html#aa95cf09
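[The Moire effect described above is ordinary undersampling, and is easy to reproduce numerically (a Python sketch; the pixel pitch and frequencies are illustrative, not tied to any particular sensor):]

```python
import math

# Sample on a pixel grid with pitch 1, i.e. sampling rate f_s = 1
# cycle/pixel, so the Nyquist limit is 0.5 cycles/pixel.
f_s = 1.0
f_sig = 0.9                  # above Nyquist: this detail will alias
f_alias = abs(f_sig - f_s)   # predicted alias frequency, 0.1 cycles/pixel

# Samples of the real high-frequency pattern and of the low-frequency
# pattern it aliases to are numerically identical at integer pixels,
# which is exactly the Moire banding seen on a sensor.
samples = [math.cos(2 * math.pi * f_sig * n) for n in range(100)]
aliased = [math.cos(2 * math.pi * f_alias * n) for n in range(100)]

max_err = max(abs(a - b) for a, b in zip(samples, aliased))
print(max_err < 1e-9)  # True
```

[The color fringing mentioned above is the same effect hitting the red, green, and blue sites of the Bayer mosaic at different phases, which is why the alias shows up as false color rather than plain banding.]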

-- glen


Article: 133297
Subject: 1D or 2D Placement for dynamically partially reconfigurable
From: grant0920 <grant0920@gmail.com>
Date: Tue, 24 Jun 2008 00:19:28 -0700 (PDT)
Links: << >>  << T >>  << A >>
Hi All:

        There are many papers about 1D or 2D placement, but they are
almost all limited to algorithm discussion. The current method for
dynamically partially reconfigurable architectures is the EAPR flow,
where the DPR blocks must be defined at design time using the UCF
file. All the area constraints end up fixed in the static_full and
partial bitstreams. So I am confused about how 1D or 2D placement can
be applied to a "real" DPR system at run-time. Are there any research
groups that have applied such placement methods to a "real" DPR
architecture? I know that some proposed methods, which use a specific
filter to rewrite the location information in the bitstream, can
relocate a partial bitstream to a new location, but the DPR blocks
must have the same area size. For example, suppose two DPR blocks are
implemented at design time. Is it possible to merge the two DPR areas
to place a partial bitstream that needs the resources of both blocks,
like true 1D or 2D placement at run-time? Is there any architecture in
which partial bitstreams can be placed freely at run-time? Thanks very
much!

Best regards,
Huang

Article: 133298
Subject: Re: Image Sensor Interface.
From: Jonathan Bromley <jonathan.bromley@MYCOMPANY.com>
Date: Tue, 24 Jun 2008 08:28:27 +0100
Links: << >>  << T >>  << A >>
On Tue, 24 Jun 2008 02:11:47 -0500, "MikeWhy" wrote:

> All the same, Bayer aliasing 
>isn't related to Nyquist aliasing and sampling frequency. The OP needn't 
>concern himself with Nyquist considerations. Yes?

Obviously, from a technology and signal-processing point of view
they live in different worlds.  But I don't really see what's
so different between thinking about spatial frequency and 
thinking about temporal frequency.

But then, part of my problem is that I learned how to think
about the frequency/time or spatial-frequency/distance
duality not through engineering, but physics:  if I want
to understand what a convolution is doing, my first resort
even today is to think about optical transforms.

Absolutely agreed that the OP probably has no control at all
over the spatial bandwidth (MTF) and spatial sampling concerns 
of his image sensor/lens combination.
-- 
Jonathan Bromley, Consultant

DOULOS - Developing Design Know-how
VHDL * Verilog * SystemC * e * Perl * Tcl/Tk * Project Services

Doulos Ltd., 22 Market Place, Ringwood, BH24 1AW, UK
jonathan.bromley@MYCOMPANY.com
http://www.MYCOMPANY.com

The contents of this message may contain personal views which 
are not the views of Doulos Ltd., unless specifically stated.

Article: 133299
Subject: Re: which commercial HDL-Simulator for FPGA?
From: Petter Gustad <newsmailcomp6@gustad.com>
Date: Tue, 24 Jun 2008 09:47:24 +0200
Links: << >>  << T >>  << A >>
"SynopsysFPGAexpress" <fpgas@sss.com> writes:

> I'm not sure how that compares to Mentor Questasim or Synopsys VCS. 

VCS/DVE (VPD dump files) supports SV datatypes. 

Petter
-- 
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?
A: Top-posting.
Q: What is the most annoying thing on usenet and in e-mail?


