Messages from 152600

Article: 152600
Subject: Re: The Manifest Destiny of Computer Architectures
From: rpw3@rpw3.org (Rob Warnock)
Date: Sat, 17 Sep 2011 05:44:12 -0500
<nmm1@cam.ac.uk> wrote:
+---------------
| For example, there are people starting to think about genuinely
| unreliable computation, of the sort where you just have to live
| with ALL paths being unreliable.  After all, we all use such a
| computer every day ....
+---------------

Yes, there are such people, those in the Computational Complexity
branch of Theoretical Computer Science who are working on bounded-error
probabilistic classes, both classical & in quantum computing:

    http://en.wikipedia.org/wiki/Bounded-error_probabilistic_polynomial
    Bounded-error probabilistic polynomial
    In computational complexity theory, bounded-error probabilistic
    polynomial time (BPP) is the class of decision problems solvable
    by a probabilistic Turing machine in polynomial time, with an error
    probability of at most 1/3 for all instances.
    ...

    http://en.wikipedia.org/wiki/BQP
    BQP
    In computational complexity theory BQP (bounded error quantum
    polynomial time) is the class of decision problems solvable by
    a quantum computer in polynomial time, with an error probability
    of at most 1/3 for all instances. It is the quantum analogue of
    the complexity class BPP.
    ...

Though the math seems to be way ahead of the hardware currently...  ;-}
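(For reference, the 1/3 in both definitions is arbitrary: a standard
amplification argument shows that any constant error bound below 1/2
gives the same class.  Run the algorithm k times independently and take
the majority answer; by the Chernoff-Hoeffding bound, for per-run error
p <= 1/3,

    \Pr[\text{majority errs}] \le e^{-2k(1/2-p)^2} \le e^{-k/18},

so the overall error shrinks exponentially in the number of repetitions.)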


-Rob

-----
Rob Warnock		<rpw3@rpw3.org>
627 26th Avenue		<http://rpw3.org/>
San Mateo, CA 94403


Article: 152601
Subject: Re: The Manifest Destiny of Computer Architectures
From: nmm1@cam.ac.uk
Date: Sat, 17 Sep 2011 11:54:05 +0100 (BST)
In article <rqSdnb7HzcZh5OnTnZ2dnUVZ_umdnZ2d@speakeasy.net>,
Rob Warnock <rpw3@rpw3.org> wrote:
>
>+---------------
>| For example, there are people starting to think about genuinely
>| unreliable computation, of the sort where you just have to live
>| with ALL paths being unreliable.  After all, we all use such a
>| computer every day ....
>+---------------
>
>Yes, there are such people, those in the Computational Complexity
>branch of Theoretical Computer Science who are working on bounded-error
>probabilistic classes, both classical & in quantum computing:

Please don't associate what I say with the auto-eroticism of those
lunatics.  While there may be some that are better than that, I have
seen little evidence of it in their papers.

The work that I am referring to almost entirely either predates
computer scientists or is being done a long way away from that area.

>    http://en.wikipedia.org/wiki/Bounded-error_probabilistic_polynomial
>    Bounded-error probabilistic polynomial
>    In computational complexity theory, bounded-error probabilistic
>    polynomial time (BPP) is the class of decision problems solvable
>    by a probabilistic Turing machine in polynomial time, with an error
>    probability of at most 1/3 for all instances.

The fundamental mathematical defects of that formulation are left as
an exercise for the reader.  Hint: if you are a decent mathematical
probabilist, they will jump out at you.

>    http://en.wikipedia.org/wiki/BQP
>    BQP
>    In computational complexity theory BQP (bounded error quantum
>    polynomial time) is the class of decision problems solvable by
>    a quantum computer in polynomial time, with an error probability
>    of at most 1/3 for all instances. It is the quantum analogue of
>    the complexity class BPP.
>
>Though the math seems to be way ahead of the hardware currently...  ;-}

And the mathematics is itself singularly unimpressive.


Regards,
Nick Maclaren.

Article: 152602
Subject: Re: Virtex 6 dev. board suppliers?
From: "scrts" <mailsoc@[remove@here]gmail.com>
Date: Sat, 17 Sep 2011 19:16:06 +0300
<rupertlssmith@googlemail.com> wrote in message 
news:da9aad66-de65-4500-867e-702b2a23e665@m5g2000vbm.googlegroups.com...
> Hi,
>
> I'm looking for a Xilinx Virtex 6 based dev. board, (with PCIe and SFP
> connectors for 10G Ethernet). Other than Hitech Global, what other
> suppliers are there? Your suggestions are much appreciated. Thanks.

Avnet? 



Article: 152603
Subject: Registers at I/O
From: valtih1978 <do@not.email.me>
Date: Sat, 17 Sep 2011 20:44:12 +0300
Synthesis optimization people seem to like registers at I/O. 
In particular, the Xilinx manual says:

   "The synthesis tools will not optimize across the Partition 
interface. If an asynchronous timing critical path crosses Partition 
boundaries, logic optimizations will not occur across the Partition 
boundary. To mitigate this issue, add a register to the asynchronous 
signal at the Partition boundary."

I like having registers all over a design.  Still, they talk as if it
were a trivial game to inject a register at an arbitrary place.
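
A minimal sketch of what the manual recommends (the entity and signal
names here are illustrative, not from the Xilinx documentation):
register the signal where it crosses the partition boundary, so that
timing analysis sees an ordinary flop-to-flop path instead of an
unconstrained asynchronous one.

    -- Hypothetical boundary register for a signal crossing a partition.
    library ieee;
    use ieee.std_logic_1164.all;

    entity partition_boundary is
        port (
            clk      : in  std_logic;
            async_in : in  std_logic;   -- crosses the partition boundary
            data_out : out std_logic
        );
    end entity;

    architecture rtl of partition_boundary is
        signal async_reg : std_logic := '0';
    begin
        process (clk)
        begin
            if rising_edge(clk) then
                async_reg <= async_in;   -- registered at the boundary
                data_out  <= async_reg;  -- downstream logic sees a clean path
            end if;
        end process;
    end architecture;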

Article: 152604
Subject: Re: The Manifest Destiny of Computer Architectures
From: nmm1@cam.ac.uk
Date: Sat, 17 Sep 2011 18:55:31 +0100 (BST)
In article <4E74E439.4000107@bitblocks.com>,
Bakul Shah  <usenet@bitblocks.com> wrote:
>
>> Despite a lot of effort over the years, nobody has ever thought of
>> a good way of abstracting parallelism in programming languages.
>
>CSP?

That is a model for describing parallelism of the message-passing
variety (including the use of Von Neumann shared data), and is in
no reasonable sense an abstraction for use in programming languages.

BSP is.  Unfortunately, it is not a good one, though I teach and
recommend that people consider it :-(


Regards,
Nick Maclaren.

Article: 152605
Subject: Re: Registers at I/O
From: Ed McGettigan <ed.mcgettigan@xilinx.com>
Date: Sat, 17 Sep 2011 11:07:32 -0700 (PDT)
On Sep 17, 10:44 am, valtih1978 <d...@not.email.me> wrote:
> Synthesis optimization people seem to like registers at I/O.
> In particular, the Xilinx manual says:
>
>    "The synthesis tools will not optimize across the Partition
> interface. If an asynchronous timing critical path crosses Partition
> boundaries, logic optimizations will not occur across the Partition
> boundary. To mitigate this issue, add a register to the asynchronous
> signal at the Partition boundary."
>
> I like having registers all over a design.  Still, they talk as if it
> were a trivial game to inject a register at an arbitrary place.

In order to have reliable (deterministic and short) timing at the IO
boundaries, you need to have registers in the IO.

The other comment that you referred to is also a very good practice.
Registering an asynchronous input at the boundary will resolve the
asynchronous event to a single clock edge within the module
(metastability concerns aside) so that timing analysis can be done
correctly and so that all parts of the module will "see" the same
value.  Since this is an asynchronous signal and by its definition can
happen at any time, adding a register has no practical impact.

These are not absolute rules. You are free to create your design in
any way that you see fit, but when the design isn't stable and
reliable you should remember these design tips.
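
Regarding the metastability concern Ed sets aside: in practice the
single boundary register is usually doubled into a two-flop
synchronizer, which gives a possibly-metastable first flop a full clock
period to resolve.  A minimal sketch (the entity and signal names are
illustrative):

    -- Hypothetical two-flop synchronizer for an asynchronous input.
    library ieee;
    use ieee.std_logic_1164.all;

    entity sync_2ff is
        port (
            clk      : in  std_logic;
            async_in : in  std_logic;
            sync_out : out std_logic
        );
    end entity;

    architecture rtl of sync_2ff is
        signal meta, stable : std_logic := '0';
    begin
        process (clk)
        begin
            if rising_edge(clk) then
                meta   <= async_in;  -- may go metastable
                stable <= meta;      -- has a full period to settle
            end if;
        end process;
        sync_out <= stable;
    end architecture;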

Article: 152606
Subject: Re: The Manifest Destiny of Computer Architectures
From: Bakul Shah <usenet@bitblocks.com>
Date: Sat, 17 Sep 2011 11:17:29 -0700
On 9/17/11 1:44 AM, nmm1@cam.ac.uk wrote:
> In article<270b4f6b-a8e8-4af0-bf4a-c36da1864692@u19g2000vbm.googlegroups.com>,
> Despite a lot of effort over the years, nobody has ever thought of
> a good way of abstracting parallelism in programming languages.

CSP?

Article: 152607
Subject: Re: The Manifest Destiny of Computer Architectures
From: Bakul Shah <usenet@bitblocks.com>
Date: Sat, 17 Sep 2011 12:35:56 -0700
On 9/17/11 10:55 AM, nmm1@cam.ac.uk wrote:
> In article<4E74E439.4000107@bitblocks.com>,
> Bakul Shah<usenet@bitblocks.com>  wrote:
>>
>>> Despite a lot of effort over the years, nobody has ever thought of
>>> a good way of abstracting parallelism in programming languages.
>>
>> CSP?
>
> That is a model for describing parallelism of the message-passing
> variety (including the use of Von Neumann shared data), and is in
> no reasonable sense an abstraction for use in programming languages.

I have not seen anything as elegant as CSP & Dijkstra's
Guarded commands and they have been around for 35+ years.

But perhaps we mean different things?  I am talking about
naturally parallel problems.  Here is an example (the first
such problem I was given in an OS class ages ago): S students,
each has to read B books in any order, the school library has
C[i] copies of the ith book.  Model this with S student
processes and a librarian process! As you can see this is
an allegory of a resource allocation problem.

It is easy to see how to parallelize an APL expression like
"F/(V1 G V2)", where scalar functions F & G take two args.
[In Scheme: (vector-fold F (vector-map G V1 V2))].  You'd have
to know the properties of F & G to do it right but potentially
this can be compiled to run on N parallel cores and these N
pieces will have to use message passing. I would like to be
able to express such decomposition in the language itself.

So you will have to elaborate why and how CSP is not a
reasonable abstraction for parallelism. Erlang, Occam & Go use
it! Go's channels and `goroutines' are easy to use.

Article: 152608
Subject: Re: The Manifest Destiny of Computer Architectures
From: nmm1@cam.ac.uk
Date: Sun, 18 Sep 2011 08:38:24 +0100 (BST)
In article <4E74F69C.5080009@bitblocks.com>,
Bakul Shah  <usenet@bitblocks.com> wrote:
>>>
>>>> Despite a lot of effort over the years, nobody has ever thought of
>>>> a good way of abstracting parallelism in programming languages.
>>>
>>> CSP?
>>
>> That is a model for describing parallelism of the message-passing
>> variety (including the use of Von Neumann shared data), and is in
>> no reasonable sense an abstraction for use in programming languages.
>
>I have not seen anything as elegant as CSP & Dijkstra's
>Guarded commands and they have been around for 35+ years.

Well, measure theory is also extremely elegant, and has been around
for longer, but is not a usable abstraction for programming.

>But perhaps we mean different things?  I am talking about
>naturally parallel problems.  Here is an example (the first
>such problem I was given in an OS class ages ago): S students,
>each has to read B books in any order, the school library has
>C[i] copies of the ith book.  Model this with S student
>processes and a librarian process! As you can see this is
>an allegory of a resource allocation problem.

Such problems are almost never interesting in practice, and very
often not in theory.  Programming is about mapping a mathematical
abstraction of an actual problem into an operational description
for a particular agent.

Perhaps the oldest and best established abstraction for programming
languages is procedures, but array (SIMD) notation and operations
are also ancient, and are inherently parallel.  However, 50 years
of experience demonstrates that they are good only for some kinds
of problem and types of agent.


Regards,
Nick Maclaren.

Article: 152609
Subject: Re: Registers at I/O
From: valtih1978 <do@not.email.me>
Date: Sun, 18 Sep 2011 13:50:46 +0300
You speak as if this were about primary I/O.  Yet partitions are blocks
of the same FPGA design, under full control of the tools.  Do you mean
that partitions are treated as designs absolutely external to each other?

Thank you.

Article: 152610
Subject: How to digitize the VGA output using FPGA?
From: Test01 <cpandya@yahoo.com>
Date: Sun, 18 Sep 2011 07:49:30 -0700 (PDT)
I would like to know if there is a development kit and documentation
available to digitize VGA signals and create a digital video
frame using an FPGA.

I see a lot of FPGA applications that generate VGA output, but I am
looking for an application that can take VGA input.  This may require
external A/D converters.

Thanks.

Article: 152611
Subject: Re: How to digitize the VGA output using FPGA?
From: "scrts" <mailsoc@[remove@here]gmail.com>
Date: Sun, 18 Sep 2011 18:57:16 +0300

"Test01" <cpandya@yahoo.com> wrote in message 
news:b686ac06-9a75-4bd4-bab1-4f33d0636afd@w8g2000yqi.googlegroups.com...
>I would like to know if there is a development kit and documentation
> available to digitize VGA signals and create a digital video
> frame using an FPGA.
>
> I see a lot of FPGA applications that generate VGA output, but I am
> looking for an application that can take VGA input.  This may require
> external A/D converters.
>
> Thanks.

Check here:
http://www.analog.com/en/analog-to-digital-converters/video-decoders/products/index.html
http://focus.ti.com/paramsearch/docs/parametricsearch.tsp?family=analog&familyId=375&uiTemplateId=NODE_STRY_PGE_T 



Article: 152612
Subject: Re: How to digitize the VGA output using FPGA?
From: Test01 <cpandya@yahoo.com>
Date: Sun, 18 Sep 2011 11:06:12 -0700 (PDT)
On Sep 18, 10:57 am, "scrts" <mailsoc@[remove@here]gmail.com> wrote:
> "Test01" <cpan...@yahoo.com> wrote in message
>
> news:b686ac06-9a75-4bd4-bab1-4f33d0636afd@w8g2000yqi.googlegroups.com...
>
> >I would like to know if there is a development kit and documentation
> > available to digitize VGA signals and create a digital video
> > frame using an FPGA.
>
> > I see a lot of FPGA applications that generate VGA output, but I am
> > looking for an application that can take VGA input.  This may require
> > external A/D converters.
>
> > Thanks.
>
> Check here:
> http://www.analog.com/en/analog-to-digital-converters/video-decoders/...
> http://focus.ti.com/paramsearch/docs/parametricsearch.tsp?family=anal...

Thanks for the link.  More importantly, is there an FPGA development
board with these analog chips that I can use for development purposes?

Ideally I am looking for a board that can take in a 15-pin VGA connector
as input.  That is, the analog RGB signals get converted into 24-bit
digital output and fed to the FPGA with a DDR3 interface.


Article: 152613
Subject: Re: How to digitize the VGA output using FPGA?
From: Test01 <cpandya@yahoo.com>
Date: Sun, 18 Sep 2011 11:35:41 -0700 (PDT)
On Sep 18, 1:06 pm, Test01 <cpan...@yahoo.com> wrote:
> On Sep 18, 10:57 am, "scrts" <mailsoc@[remove@here]gmail.com> wrote:
>
> > "Test01" <cpan...@yahoo.com> wrote in message
>
> >news:b686ac06-9a75-4bd4-bab1-4f33d0636afd@w8g2000yqi.googlegroups.com...
>
> > >I would like to know if there is a development kit and documentation
> > > available to digitize VGA signals and create a digital video
> > > frame using an FPGA.
>
> > > I see a lot of FPGA applications that generate VGA output, but I am
> > > looking for an application that can take VGA input.  This may require
> > > external A/D converters.
>
> > > Thanks.
>
> > Check here:
> > http://www.analog.com/en/analog-to-digital-converters/video-decoders/......
>
> Thanks for the link.  More importantly, is there an FPGA development
> board with these analog chips that I can use for development purposes?
>
> Ideally I am looking for a board that can take in a 15-pin VGA connector
> as input.  That is, the analog RGB signals get converted into 24-bit
> digital output and fed to the FPGA with a DDR3 interface.


There is hardware from Bitec/Altera at the link below that takes in a
29-pin DVI connector.  It seems to include analog RGB signal inputs as
well, so the DVI connector is a superset of the VGA connector and this
particular board can digitize VGA video.  In other words, I should be
able to use this board as a reference board.  Does that make sense?

http://www.bitec.ltd.uk/hsmc_dvi_1080p_csc_c120.pdf

Thanks for your help.
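
Whichever decoder board is used, the FPGA side of such a capture front
end starts out small.  A minimal sketch, assuming the video ADC provides
a pixel clock, a data-enable signal, and 24-bit RGB (all of the entity,
port, and signal names here are illustrative assumptions):

    -- Hypothetical capture front end: register the decoder's pixel bus
    -- on its pixel clock and count columns within the active region.
    library ieee;
    use ieee.std_logic_1164.all;
    use ieee.numeric_std.all;

    entity vga_capture is
        port (
            pix_clk : in  std_logic;                      -- from the video ADC
            de      : in  std_logic;                      -- high during active video
            rgb_in  : in  std_logic_vector(23 downto 0);  -- 8:8:8 from the ADC
            rgb_out : out std_logic_vector(23 downto 0);
            pix_x   : out unsigned(10 downto 0)           -- column counter
        );
    end entity;

    architecture rtl of vga_capture is
        signal x : unsigned(10 downto 0) := (others => '0');
    begin
        process (pix_clk)
        begin
            if rising_edge(pix_clk) then
                if de = '1' then
                    rgb_out <= rgb_in;  -- registered pixel, ready for a FIFO/DDR3 writer
                    x       <= x + 1;
                else
                    x <= (others => '0');
                end if;
            end if;
        end process;
        pix_x <= x;
    end architecture;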

Article: 152614
Subject: Re: Xilinx Tin Whiskers ?
From: Jon Elson <elson@pico-systems.com>
Date: Sun, 18 Sep 2011 17:18:53 -0500
Jon Elson wrote:

Hmmm, one additional tidbit.  Some boards reflowed at the
same time have been stored in a lab environment.  These boards
in question were stored in my basement for six months.  The lab env. boards
show no sign of the whiskers.  Conditions in my basement are
not bad at all, but it is likely more humid down there than
in the lab.  So, I guess this means don't store lead-free
boards in humid conditions.

Jon

Article: 152615
Subject: Re: Xilinx Tin Whiskers ?
From: nico@puntnl.niks (Nico Coesel)
Date: Sun, 18 Sep 2011 22:31:11 GMT
Jon Elson <elson@pico-systems.com> wrote:

>Jon Elson wrote:
>
>Hmmm, one additional tidbit.  Some boards reflowed at the
>same time have been stored in a lab environment.  These boards
>in question were stored in my basement for six months.  The lab env. boards
>show no sign of the whiskers.  Conditions in my basement are
>not bad at all, but it is likely more humid down there than
>in the lab.  So, I guess this means don't store lead-free
>boards in humid conditions.

IMHO this is the wrong solution. Actually it is not a solution at all.
You really should get in touch with someone who has experience in this
field in order to solve the problem at the root.

-- 
Failure does not prove something is impossible, failure simply
indicates you are not using the right tools...
nico@nctdevpuntnl (punt=.)
--------------------------------------------------------------

Article: 152616
Subject: Re: The Manifest Destiny of Computer Architectures
From: Andrew Reilly <areilly---@bigpond.net.au>
Date: 18 Sep 2011 23:31:15 GMT
On Sat, 17 Sep 2011 09:44:35 +0100, nmm1 wrote:

> Despite a lot of effort over the years, nobody has ever thought of a
> good way of abstracting parallelism in programming languages.

That's not really all that surprising though, is it?  Hardware that
exhibits programmable parallelism has taken many different forms over the
years, especially with many different scales of granularity of the
parallelisable sequential operations and inter-processor communications.
The entire issue of parallelism is essentially orthogonal to the
sequential Turing/von Neumann model of computation that is at the heart of
most programming languages.  It's not obvious (to me) that a single
language could reasonably describe a problem and have it map efficiently
across "classical" cross-bar shared memory systems (including barrel
processors), NUMA shared memory, distributed shared memory, clusters, and
clouds (the latter just an example of the dynamic resource count vs known-
at-compile-time axis), all of which incorporate both sequential and vector
(and GPU-style) resources.

Which is not to say that such a thing can't exist.  My expectation is 
that it will wind up being something very functional in shape that 
relaxes as many restrictions on order-of-execution as possible (including 
order of argument evaluation), sitting on top of a dynamic execution 
environment that can compile and re-compile code and shift it around in 
the system to match the data that is observed at run-time.  

That is: the language can't assume a Turing model, but rather a more 
mathematical or declarative one.  The compiler has to choose where 
sequential execution can be applied, and where that isn't appropriate.

Needless to say, we're not there yet, but I expect to see it in the next 
dozen or so years.

Cheers,

-- 
Andrew

Article: 152617
Subject: Re: The Manifest Destiny of Computer Architectures
From: glen herrmannsfeldt <gah@ugcs.caltech.edu>
Date: Mon, 19 Sep 2011 01:03:52 +0000 (UTC)
In comp.arch.fpga Andrew Reilly <areilly---@bigpond.net.au> wrote:
> On Sat, 17 Sep 2011 09:44:35 +0100, nmm1 wrote:

>> Despite a lot of effort over the years, nobody has ever thought of a
>> good way of abstracting parallelism in programming languages.

> That's not really all that surprising though, is it?  Hardware that 
> exhibits programmable parallelism has taken many different forms over the 
> years, especially with many different scales of granularity of the 
> parallelisable sequential operations and inter-processor communications.

Yes, but programs tend to follow the mathematics of matrix algebra.

A language that allowed for parallel processing of matrix operations,
independent of the underlying hardware, should help.

Note that both the PL/I and Fortran array assignment complicate
parallel processing.  In the case of overlap, where elements changed
in the destination can later be used in the source, PL/I requires
that the new value be used (as if processed sequentially), where
Fortran requires that the old value be used (a temporary array may
be needed).  The Fortran FORALL conveniently doesn't help much.

A construct that allowed the compiler (and parallel processor) to
do the operations in any order, including a promise that no aliasing
occurs, and that no destination array elements are used in the source,
would, it seems to me, help.

Maybe even an assignment construct that allowed for a group of
assignments (presumably array assignments) to be executed, allowing
the compiler to do them in any order, again guaranteeing no aliasing
and no element reuse.
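
As an aside for the comp.arch.fpga half of this crossposting: VHDL
signal assignment already has the Fortran-style "old value" semantics
described above.  The right-hand side is evaluated with the current
values, and the signal only updates afterwards, so an in-place
permutation needs no temporary.  A minimal sketch (assuming a signal
'a' declared as std_logic_vector(7 downto 0)):

    -- Every right-hand side reads the pre-update value of 'a',
    -- so this rotates the vector right without a temporary copy.
    process (clk)
    begin
        if rising_edge(clk) then
            a <= a(0) & a(7 downto 1);
        end if;
    end process;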

> The entire issue of parallelism is essentially orthogonal to the 
> sequential Turing/von-Neuman model of computation that is at the heart of 
> most programming languages.  It's not obvious (to me) that a single 
> language could reasonably describe a problem and have it map efficiently 
> across "classical" cross-bar shared memory systems (including barrel 
> processors), NuMA shared memory, distributed shared memory, clusters, and 
> clouds (the latter just an example of the dynamic resource count vs known-
> at-compile-time axis) all of which incorporate both sequential and vector 
> (and GPU-style) resources.

Well, part of it is that we aren't so good at thinking of problems
that way.  We (people) like to think things through one step at a
time, and the von Neumann model allows for that.

> Which is not to say that such a thing can't exist.  My expectation is 
> that it will wind up being something very functional in shape that 
> relaxes as many restrictions on order-of-execution as possible (including 
> order of argument evaluation), sitting on top of a dynamic execution 
> environment that can compile and re-compile code and shift it around in 
> the system to match the data that is observed at run-time.  

> That is: the language can't assume a Turing model, but rather a more 
> mathematical or declarative one.  The compiler has to choose where 
> sequential execution can be applied, and where that isn't appropriate.

> Needless to say, we're not there yet, but I expect to see it in the next 
> dozen or so years.

In nuclear physics there is a constant describing the number of years
until viable nuclear fusion power plants can be built.  It is a
constant in that it seems to always be (about) that many years in the
future.    (I believe it is about 20 or 30 years, but I can't find a
reference.)

I wonder if this dozen years is also a constant.  People have been
working on parallel programming for years, yet usable programming
languages are always in the future.

-- glen


Article: 152618
Subject: Re: Virtex 6 dev. board suppliers?
From: Bryan <bryan.fletcher@avnet.com>
Date: Sun, 18 Sep 2011 18:09:19 -0700 (PDT)
> Avnet?

The Avnet-designed Virtex-6 LX130T Evaluation Kit is no longer
available.  The ML605 has PCIe and SFP.
  www.xilinx.com/ml605

Bryan

Article: 152619
Subject: Re: The Manifest Destiny of Computer Architectures
From: Bakul Shah <usenet@bitblocks.com>
Date: Sun, 18 Sep 2011 18:26:45 -0700
On 9/18/11 12:38 AM, nmm1@cam.ac.uk wrote:
> In article<4E74F69C.5080009@bitblocks.com>,
> Bakul Shah<usenet@bitblocks.com>  wrote:
>>
>> I have not seen anything as elegant as CSP&  Dijkstra's
>> Guarded commands and they have been around for 35+ years.
>
> Well, measure theory is also extremely elegant, and has been around
> for longer, but is not a usable abstraction for programming.

Your original statement was
 > Despite a lot of effort over the years, nobody has ever thought of
 > a good way of abstracting parallelism in programming languages.

I gave some counterexamples but instead of responding to that,
you bring in some random assertion. If you'd used Erlang or Go and
had actual criticisms, that would at least make this discussion
interesting. Ah well.

Article: 152620
Subject: Re: Xilinx Tin Whiskers ?
From: Jon Elson <elson@pico-systems.com>
Date: Sun, 18 Sep 2011 21:30:41 -0500
Nico Coesel wrote:


> 
> IMHO this is the wrong solution. Actually it is not a solution at all.
> You really should get in touch with someone who has experience in this
> field in order to solve the problem at the root.
> 
You have to understand this is a REALLY small business.  I have an
old Philips pick & place machine in my basement, and reflow the boards
in a toaster oven, with a thermocouple reading the temperature of the boards.
I can't afford to have a $3000 a day consultant come in, and they'd just
laugh when they saw my equipment.

I could go to an all lead-free process, but these boards have already been
made with plain FR-4 and tin-lead finish.  As for getting tin/lead parts,
that is really difficult for a number of the components.

And, I STILL don't know why this ONE specific part is the ONLY one to show
this problem.  I use a bunch of other parts from Xilinx with no whiskers,
as well as from a dozen other manufacturers.

Jon

Article: 152621
Subject: Re: The Manifest Destiny of Computer Architectures
From: Robert Myers <rbmyersusa@gmail.com>
Date: Sun, 18 Sep 2011 23:18:00 -0400
On 9/18/2011 9:03 PM, glen herrmannsfeldt wrote:

> In comp.arch.fpga Andrew Reilly<areilly---@bigpond.net.au>  wrote:
>> On Sat, 17 Sep 2011 09:44:35 +0100, nmm1 wrote:
>
>>> Despite a lot of effort over the years, nobody has ever thought of a
>>> good way of abstracting parallelism in programming languages.
>
>> That's not really all that surprising though, is it?  Hardware that
>> exhibits programmable parallelism has taken many different forms over the
>> years, especially with many different scales of granularity of the
>> parallelisable sequential operations and inter-processor communications,
>
> Yes, but programs tend to follow the mathematics of matrix algebra.
>

Spoken like someone who would know the difference between covariant and 
contravariant and wouldn't blink at a Christoffel symbol.

This is the "crystalline" memory structure that has so obsessed me.  All 
of the most powerful mathematical disciplines would at one time have fit 
pretty well into this paradigm.

As Andy Glew commented, after talking to some CFD people, maybe the most
natural structure is not objects like vectors and tensors, but something
far more general.  Trees (graphs) are important, and they can express a
much more general class of objects than multidimensional arrays.  The
generality has an enormous price, of course.

<snip>

>
> I wonder if this dozen years is also a constant.  People have been
> working on parallel programming for years, yet usable programming
> languages are always in the future.
>


At least one and possibly more generations will have to die off.  At one 
time, science and technology progressed slowly enough that the tenure of 
senior scientists and engineers was not an obvious obstacle to progress.
Now it is.

Robert.

Article: 152622
Subject: Re: The Manifest Destiny of Computer Architectures
From: Andrew Reilly <areilly---@bigpond.net.au>
Date: 19 Sep 2011 05:02:41 GMT
On Sun, 18 Sep 2011 18:26:45 -0700, Bakul Shah wrote:

> On 9/18/11 12:38 AM, nmm1@cam.ac.uk wrote:
>> In article<4E74F69C.5080009@bitblocks.com>,
>> Bakul Shah<usenet@bitblocks.com>  wrote:
>>>
>>> I have not seen anything as elegant as CSP&  Dijkstra's Guarded
>>> commands and they have been around for 35+ years.
>>
>> Well, measure theory is also extremely elegant, and has been around for
>> longer, but is not a usable abstraction for programming.
> 
> Your original statement was
>  > Despite a lot of effort over the years, nobody has ever thought of a
>  > good way of abstracting parallelism in programming languages.
> 
> I gave some counterexamples but instead of responding to that, you
> bring in some random assertion. If you'd used Erlang or Go and had
> actual criticisms, that would at least make this discussion interesting.
> Ah well.

I've read the language descriptions of Erlang and Go and think that both 
are heading in the right direction, in terms of practical coarse-grain 
parallelism, but I doubt that there is a compiler (for any language) that 
can turn, say, a large GEMM or FFT problem expressed entirely as 
independent agents or go-routines (or futures) into cache-aware vector 
code that runs nicely on a small-ish number of cores, if that's what you 
happen to have available.  It isn't really a question of language at all: 
as you say, erlang, go and a few others already have quite reasonable 
syntaxes for independent operation.  The problem is one of compilation 
competence: the ability to decide/adapt/guess vast collections of 
nominally independent operations into efficient arbitrarily sequential 
operations, rather than putting each potentially-parallel operation into 
its own thread and letting the operating system's scheduler muddle 
through it at run-time.

Cheers,

-- 
Andrew

Article: 152623
Subject: Re: clock enable for fixed interval
From: backhus <goouse99@googlemail.com>
Date: Sun, 18 Sep 2011 22:48:41 -0700 (PDT)
On 16 Sep., 21:14, Jim <james.kn...@gmail.com> wrote:
> > Hi Jim,
> > using clock enables for multirate systems is a proper way, but you are
> > trying to make it unnecessarily complicated.
> > It is much simpler.
>
> > You have a master clock, and a counter that provides the necessary
> > frequency division.
> > So far so good.
> > Now you only need to create an impulse for a single clock period.
> > This can be done like this:
>
> >     clock_divider_counter_proc: process (reset, clock)
> >     begin
> >         if reset = '1' then
> >             count <= (others => '0');
> >             clock_divided_i <= '0';
> >         elsif rising_edge(clock) then
> >             count <= count + 1;
> >             -- clock_divided_i <= count(4); -- this would be too long
> >             if count = "11111" then
> >                 clock_enable_clock_divided <= '1';
> >             else
> >                 clock_enable_clock_divided <= '0';
> >             end if;
> >         end if;
> >     end process;
>
> > That's all you need.
> > When assigning to clock_enable_clock_divided, the clock-to-output
> > delay and routing delay are sufficient to
> > keep the signal valid beyond the next rising clock edge.
> > (If that wouldn't work this way, pipelining data from one register to
> > the next wouldn't work either, but it does.)
>
> > Have a nice synthesis
> >    Eilert
>
> Eilert,
>
> Thanks for the quick response.  After I posted, I read that FPGAs
> typically have 0 hold times, so your approach seems great.  Thanks for
> the help.

Hi Jim,
well, it should be good, because it's been recommended in some XILINX
papers and used in their System Generator tool as the default method
for multirate systems. :-)

Have a nice synthesis
  Eilert
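
To round the example out, downstream multirate logic stays on the master
clock and simply qualifies its updates with the enable, rather than
using a derived clock.  A minimal sketch in the same fragment style as
the code above (assuming slow_reg is an unsigned counter declared
alongside count):

    -- Hypothetical consumer of clock_enable_clock_divided: this
    -- register updates once every 32 master-clock cycles.
    slow_logic_proc: process (reset, clock)
    begin
        if reset = '1' then
            slow_reg <= (others => '0');
        elsif rising_edge(clock) then
            if clock_enable_clock_divided = '1' then
                slow_reg <= slow_reg + 1;  -- any slow-rate update goes here
            end if;
        end if;
    end process;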

Article: 152624
Subject: Re: How to digitize the VGA output using FPGA?
From: backhus <goouse99@googlemail.com>
Date: Sun, 18 Sep 2011 22:52:44 -0700 (PDT)
On 18 Sep., 20:06, Test01 <cpan...@yahoo.com> wrote:
> On Sep 18, 10:57 am, "scrts" <mailsoc@[remove@here]gmail.com> wrote:
>
> > "Test01" <cpan...@yahoo.com> wrote in message
>
> >news:b686ac06-9a75-4bd4-bab1-4f33d0636afd@w8g2000yqi.googlegroups.com...
>
> > >I would like to know if there is a development kit and documentation
> > > available to digitize VGA signals and create a digital video
> > > frame using an FPGA.
>
> > > I see a lot of FPGA applications that generate VGA output, but I am
> > > looking for an application that can take VGA input.  This may require
> > > external A/D converters.
>
> > > Thanks.
>
> > Check here:
> > http://www.analog.com/en/analog-to-digital-converters/video-decoders/......
>
> Thanks for the link.  More importantly, is there an FPGA development
> board with these analog chips that I can use for development purposes?
>
> Ideally I am looking for a board that can take in a 15-pin VGA connector
> as input.  That is, the analog RGB signals get converted into 24-bit
> digital output and fed to the FPGA with a DDR3 interface.

Hi,
there are some boards available from Xilinx, e.g. the ML506.
This board provides a 15-pin VGA input and a DVI-I connector for output
purposes (analog & digital).

Of course, this board is not cheap.

Have a nice synthesis
  Eilert


