Science topics: Mathematics

Science topic

# Mathematics - Science topic

Mathematics, Pure and Applied Math

Questions related to Mathematics

In its modern form, mathematical research is distinguished by the attribute of rigor: a claim is not accepted as a "theorem" merely because experience suggests it is quite accurate for practical purposes. In particular, an argument supporting a claim does not constitute a "proof" unless it is completely airtight relative to the assumptions. Does this practice advance or hinder the progress of science in general, and can rigor be useful, or even implemented, in other fields?

It was long thought impossible to have a closed-form formula that can calculate an arbitrary Nth digit of pi, until Bailey, Borwein, and Plouffe produced such a formula in base 16 in the mid-1990s (the BBP formula). My question is: why is it only possible in base 16, and what is so special about "16"?

Have formulas for the Nth digit of other transcendental numbers (e.g. e) been produced yet? Are these always in base 16, or do they require other bases? What is the status of this research on transcendental numbers? How many now have formulas for their Nth digit, and in what bases?
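For reference, the digit-extraction idea behind the BBP formula can be sketched in a few lines. This is an illustrative sketch (function names are mine, and the float tail is only accurate for moderate n): the k-th term of pi = sum 16^-k (4/(8k+1) - 2/(8k+4) - 1/(8k+5) - 1/(8k+6)) is scaled by 16^n with modular exponentiation, so the n-th hex digit is obtained without computing the earlier digits to full precision.

```python
def pi_hex_digit(n):
    # Returns the (n+1)-th hexadecimal digit of pi after the point via BBP.
    def series(j):
        # Fractional part of sum_k 16^(n-k) / (8k + j).
        s = 0.0
        for k in range(n + 1):
            s = (s + pow(16, n - k, 8 * k + j) / (8 * k + j)) % 1.0
        # Small tail with k > n (terms shrink like 16^(n-k)).
        t, k = 0.0, n + 1
        while True:
            term = 16.0 ** (n - k) / (8 * k + j)
            if term < 1e-17:
                break
            t += term
            k += 1
        return (s + t) % 1.0

    frac = (4 * series(1) - 2 * series(4) - series(5) - series(6)) % 1.0
    return int(16 * frac)

# Hex expansion of pi is 3.243F6A88... : digits 2, 4, 3, F, 6, A, ...
digits = [pi_hex_digit(i) for i in range(6)]
```

The modular power `pow(16, n - k, 8*k + j)` is what makes the base (a power of 2 such as 16) essential to this particular trick.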

Can we generalize a formula to determine the cube root of any given number?

It is interesting how maths is useful for describing the physical world. But are there any branches of mathematics that are totally useless for physics? Why? Could it be that we have, perhaps anthropocentrically, chosen to follow branches of math that are interesting to us (i.e. could have possible applications)? To prove the point, could we invent a branch of math that is totally useless?

Could we come up with a sophisticated group theory for the game of chess? Is the reason no one has attempted this that it would in fact be utterly useless, with an unexciting loss of generality?

I would like to know why a person who is interested in studying language by means of computers should study matrices and linear algebra.

All our applied mathematics has evolved under the assumption of continuity, while all basic processes in Nature appear to be 'discrete', meaning non-continuous. (The prototypical example of a continuous model is the inner-product vector space, of which the Hilbert space of QM is a special case.)

However, instead of looking for a fundamentally new formalism, or formal language, explicating the observed and formally unfamiliar discreteness, for some *irrational* (but quite human) reasons most physicists believe that one can somehow 'save' the conventional formalism by "discretizing" it. Well, the bad news is that, from a formal point of view, this does not make sense: you cannot do this to any formalism without destroying its integrity (that is how formalisms are structured). Heisenberg has a paper on this topic.

From the experimental point of view, since *all* our measurement instruments are discrete, we have no obvious way to verify the continuity hypothesis.

So to compete with the continuous formalism, we need a fundamentally new formalism to tell us how to interpret and to deal with the discovered ubiquitous 'discreteness'. Obviously, the new formalism must offer some radically new insights into the nature of reality, together with the new formal tools that should allow us to see physical processes in a completely new light, eliminating, in particular, the unacceptable wave-particle duality.

(Nevertheless, it may turn out later that some 'surrogate' form of *spatial* continuity is valid but not as a basic *underlying* model.)

*****************************************************************************************

For convenience, I will be collecting here just some of the relevant quotes by prominent scientists from the answers below:

1. "If you envisage the development of physics in the last half-century, you get the impression that the discontinuous aspect of nature has been forced upon us very much against our will. We seemed to feel quite happy with the continuum. Max Planck was seriously frightened by the idea of a discontinuous exchange of energy ... Twenty-five years later the inventors of wave mechanics indulged for some time in the fond hope that they have paved the way of return to a classical continuous description, but again the hope was deceptive. Nature herself seemed to reject continuous description …

The observed facts (about particles and light and all sorts of radiation and their mutual interaction) appear to be repugnant to the classical ideal of continuous description in space and time. ... So the facts of observation are irreconcilable with a continuous description in space and time …" (Schrödinger, 1951)

[I want to comment that we don't "feel quite happy with the continuum": we simply have not had any other formalism to compete with it.]

2. "As is well known, physics became a science only after the invention of differential calculus. It was after realizing [rather postulating] that natural phenomena are continuous that attempts to construct abstract models were successful. …

True basic [physical] laws can only hold in the small and must be formulated as partial differential equations. Their integration provides the laws for extended parts of time and space." (Riemann)

[But now it seems most likely that "the basic laws" do not "hold in the small", and hence the logic of the continuous model is broken]

3. "You have correctly grasped the drawback that the continuum brings. . . . The problem seems to me how one can formulate statements about a discontinuum without calling upon a continuum (space-time) as an aid; the latter should be banned from the theory as a supplementary construction not justified by the essence of the problem, [a construction] which corresponds to nothing "real". But we still lack the mathematical structure unfortunately. How much have I already plagued myself in this way!" (Einstein, 1916 letter, quoted from the paper by John Stachel mentioned in my Dec. 6 answer)

4. "Newton thought that light was made up of particles---he called them "corpuscles"---and he was right . . . We know that light is made of particles because we can take a very sensitive instrument that makes clicks when light shines on it, and if the light gets dimmer, the clicks remain just as loud---there are just fewer of them. Thus light is something like raindrops---each little lump of light is called a photon---and if the light is all one color, all the "raindrops" are the same size.

I want to emphasize that light comes in this form---particles. It is very important to know that light behaves like particles, especially for those of you who have gone to school, where you were probably told something about light behaving like waves. I'm telling you the way it *does* behave---like particles." (Feynman, QED, 1985)

I have an infinite set of functions F = {f1, f2, ...} mapping R^n into itself. I know that this set has a group structure with respect to composition, i.e. for every fi and fj in F there exists fk in F such that fi * fj = fk (* stands for composition). There is a unique fe which corresponds to the group identity, and for every fi there is an inverse fj: fi * fj = fj * fi = fe.

I guess that this set of functions defines a Lie group; however, I don't know the number of its parameters, and the indices have no topological meaning. Is there any way to find the number of parameters and to introduce them so that the family of functions F would be smooth with respect to those parameters? My first guess was to introduce a metric on F, but I don't know how to do that. Any help would be highly appreciated.

In some countries (including some universities in India) a "higher" form of doctoral work was awarded a D.Sc. degree. It was generally considered a degree "superior" to the Ph.D., as it was presumed that the work was done by the candidate unaided by a supervisor. Are there similar programs in other countries?

Math can generate material reality as follows:

1- Through geometry: in his book "The Shape of Inner Space: The Universe's Hidden Dimensions", Shing-Tung Yau shows how pure geometry can generate material universes.

2- Proof by contradiction: why did a physical universe arise in the first place? Break the question into two separate questions:

2a- First, if there is absolute nothingness, i.e. if there had never been a universe the way we know it to begin with, could it (nothingness) have gone on unimpeded,

and

2b- Given that something now exists, or equivalently has existed at some point, can it ever be that full nothingness re-establishes itself at some point?

Tackling 2b first: there is no credible mechanism for the full dissipation of what is (at any arbitrary point in time when something happens to be).

The only way to reach full nothingness would be to dilute everything to a fully vanishing point. But then, because of infinitesimal residuals, all the conditions needed to create at least one element term of the Heisenberg relationship would be there, immediately giving rise to quantum foam, which would by definition not be pure nothingness. It is too late for pure nothingness to ever exist.

Which leads us to 2a: could there ever have been full nothingness?

Mathematics demonstrates that the Heisenberg relationship is inescapable in any universe, i.e. including a void universe (its ensemble of conjugate attributes being a perhaps infinite collection of 2-element sets). The Heisenberg relationship will, however, always give rise to 'quantum foam', i.e. to something.

Therefore the question can be equivalently reformulated as: can it be that abstract mathematics cannot exist independently of some material support?

Or equivalently: can it be that abstract mathematics is not fully abstract? This seems to be a logical inconsistency, and therefore it must be answered in the negative.

Therefore, going back up the chain, we can safely conclude that pure nothingness cannot possibly exist.

Our Universe may be regarded as experimental proof.

Question: is there any other mechanism, NOT reducible to pure math (e.g., not a quantum fluctuation), capable of generating a material universe ex nihilo ?

It is said that if you want to build high-quality software, then you must know data structures.

I am new to chaotic systems and have a question about Lyapunov exponents as a measure for quantifying chaos. Chaos textbooks state that a positive Lyapunov exponent means chaos in the system. This seems not exactly true, since an unstable but non-chaotic system can also have a positive Lyapunov exponent (beyond having unstable eigenvalues). For example, both the logistic map {x(n+1)=r*x(n)*(1-x(n)) with r in the chaotic regime, e.g. r=3.9} and an unstable linear system {for example x(n+1)=r*x(n) with r>1} lead to a positive Lyapunov exponent.

Can anybody explain the difference between these two? Is there a better measurement tool than the Lyapunov exponent for chaotic systems?
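The distinction can be illustrated numerically. In this sketch (map choices and tolerances are mine), the chaotic logistic map at r = 4 and the unstable linear map x -> 2x both yield the exponent ln 2, yet only the first has a bounded, aperiodic orbit; the second simply diverges. Positivity alone does not certify chaos without boundedness.

```python
import math

def lyapunov(f, df, x0, n=100_000):
    # Average of log|f'(x)| along the orbit estimates the Lyapunov exponent.
    x, s = x0, 0.0
    for _ in range(n):
        s += math.log(max(abs(df(x)), 1e-12))  # guard against log(0)
        x = f(x)
    return s / n

# Chaotic logistic map at r = 4: the exact exponent is ln 2 ~ 0.693.
r = 4.0
lam_logistic = lyapunov(lambda x: r * x * (1 - x), lambda x: r * (1 - 2 * x), 0.3)

# Unstable linear map x -> 2x: same exponent ln 2, but the orbit diverges
# to infinity instead of exploring a bounded attractor.
lam_linear = lyapunov(lambda x: 2 * x, lambda x: 2.0, 0.3)
```

A common refinement is therefore to require a positive exponent together with a bounded (compact) invariant set before calling a system chaotic.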

Curvature is the main determinant of ripening processes in nanoscale materials. Morphologies with minimal and constant mean curvature can be ripening-resistant.

I'm working on a problem where I would need an error correcting code of length at least 5616 bits (and preferably not much longer) that could achieve correct decoding even if 40% of the received word is erroneous. I have looked into some basic coding theory textbooks, but I have not found anything that would suit my purpose. Does such a code exist? If it does, what kind of code is it and can it be efficiently realized? If it doesn't, why not?

Any insight to the problem will be much appreciated.

The approximate solution that satisfies the boundary conditions is: [equation image not shown]

The equation below should be minimized: [equation image not shown]

So we have this integral: [equation image not shown]

Initially, for no deflection (w = 0), the value of λ from the above equation is found as: [equation image not shown]

I cannot understand how λ is calculated. That is the question: how is λ calculated?

Boolean function simplification required (to a minimum number of literals)!

F = w'x(z'+yz) + y(ww + w'yz)?
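One candidate simplification, using idempotence (ww = w, y·y = y) and the absorption law w + w'z = w + z, is F = w'xz' + y(w + z), six literals. The claim can be checked exhaustively (helper names below are mine):

```python
from itertools import product

def F(w, x, y, z):
    # Original expression: F = w'x(z' + yz) + y(ww + w'yz)
    return ((1 - w) & x & ((1 - z) | (y & z))) | (y & ((w & w) | ((1 - w) & y & z)))

def F_simplified(w, x, y, z):
    # Candidate 6-literal form: F = w'xz' + y(w + z)
    return ((1 - w) & x & (1 - z)) | (y & (w | z))

# Compare the two forms on all 16 input combinations.
all_equal = all(F(*t) == F_simplified(*t) for t in product((0, 1), repeat=4))
```

An exhaustive truth-table check like this is a cheap safety net for hand simplifications with only a few variables.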

While demonstrating the importance of infinity, Hilbert gave the example of a hotel with infinitely many rooms, now called Hilbert's hotel. One night a guest, who is a mathematician, appears at the reception of the hotel and asks for a room. The receptionist tells him that all the rooms in the hotel are full. Hotel room numbers are natural numbers, starting with 1, 2, 3, etc. The mathematician thinks for a while and tells the receptionist that it is possible to give him a room without asking any of the occupants to leave the hotel. The receptionist wonders how, and the mathematician explains that it is simple: ask each occupant to shift to the next room. Shift the occupant of room no. 1 to room no. 2, the occupant of room no. 2 to room no. 3, and so on, so that the occupant of room no. (n) moves to room no. (n+1); then room no. 1 is free. Since there are infinitely many rooms in Hilbert's hotel, this works.

Now, what if two guests appear at Hilbert's hotel, which is full, and ask to be accommodated without asking previous occupants to move out?

What if a group of (n) guests appear in Hilbert's hotel which is full and ask to get accommodated without asking previous occupants to leave the hotel?
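The reassignments for all these scenarios can be written down explicitly; the point is only that each map below is an injection of the natural numbers into themselves that leaves rooms free (a toy sketch, function names are mine):

```python
def one_new_guest(n):
    # Occupant of room n moves to room n + 1; room 1 becomes free.
    return n + 1

def k_new_guests(n, k):
    # Occupant of room n moves to room n + k; rooms 1..k become free.
    return n + k

def infinitely_many_new_guests(n):
    # Occupant of room n moves to room 2n; all odd-numbered rooms become free,
    # accommodating a countably infinite group of newcomers.
    return 2 * n
```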

We are working on coupled Fibonacci sequences of higher orders.

I am defining an algebra of smooth functions.

What is the importance of the Leibniz rule there?

I tried some ready-made packages such as Mathematica and Maple, but the results were not satisfactory.

Newton introduced differential equations to physics some three centuries ago. Later Maxwell added his own set. We also have the Navier-Stokes equation, and of course the Schroedinger equation. All these were big steps in science, no doubt. But I feel uneasy when I see, for example in thermodynamics, differentiation with respect to the (discrete!) number of particles. That is a clear abuse of a beautiful and well-established mathematical concept, yet nobody complains or even raises the question. Our world seems discrete (look at STM images if you don't like nineteenth-century Dalton's law), so perhaps we need some other mathematical tool(s) to describe it correctly? Maybe graph theory?

Is there any relationship between Walsh functions and rationalized Haar functions?

Can we express Walsh functions in terms of Haar functions?

Consider the basic arithmetical operations of addition, subtraction, multiplication and division.

Can the sum H_n = 1 + 1/2 + 1/3 + ... + 1/n be expressed using a number of these basic operations that does not depend on n? Supposedly this cannot be done, but what is the proof?

For example the sum:

S_n = 1+2+3+...+n, which takes n-1 additions term by term, can be expressed compactly by Gauss's formula as n*(n+1)/2, where the number of operations is three (and so does not depend on n).

I need a similar thing for H_n, or a proof that this can not be done.

I am trying to cover a sphere (the surface of a ball in 3D) with identical equilateral triangles. In the first step an octahedron is inscribed in a (unit) sphere. Its faces make the first generation of 8 triangles, obviously equilateral. Next-generation triangles are created as follows: the midpoints of the edges of a selected "old" triangle are found and projected onto the sphere. This replaces an old triangle with 4 new, smaller ones. The procedure is repeated for all "old" triangles, producing the new generation of triangles. Unexpectedly, the new triangles are no longer equilateral; only 2 edges are of equal length (after, say, the 5th generation is created). The observed length differences, on the order of 6-10%, seem far too large to blame on rounding errors. So, what am I doing wrong?

All calculations are done exclusively in Cartesian coordinates.
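The effect can be reproduced directly. In the sketch below (my own reconstruction of the described procedure), the edge-length spread within faces after five generations is on the order of tens of percent near the octahedron's corners, so it is a genuine geometric property of midpoint projection onto the sphere, not a rounding artifact:

```python
import numpy as np

def normalize(p):
    return p / np.linalg.norm(p)

def subdivide(t):
    # Project edge midpoints onto the sphere; one triangle becomes four.
    a, b, c = t
    ab, bc, ca = normalize((a + b) / 2), normalize((b + c) / 2), normalize((c + a) / 2)
    return [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]

def edge_lengths(t):
    a, b, c = t
    return [np.linalg.norm(a - b), np.linalg.norm(b - c), np.linalg.norm(c - a)]

# One face of the octahedron inscribed in the unit sphere.
tris = [(np.array([1.0, 0, 0]), np.array([0, 1.0, 0]), np.array([0, 0, 1.0]))]
for _ in range(5):
    tris = [s for t in tris for s in subdivide(t)]

# Relative spread between longest and shortest edge of each small triangle.
spreads = [(max(e) - min(e)) / min(e) for e in map(edge_lengths, tris)]
```

Already at the first generation the corner triangles are isosceles (edges roughly 0.765, 0.765, 1.0), which matches the "only 2 edges equal" observation.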

The equations are like this: 1-X=exp(-A*Y^B)

I need to determine the parameters A and B and then use the same model to predict values, which should be compared with the experimental observations.

The linearized equation is: Ln(Y) = {Ln(-Ln(1-X)) - Ln(A)}/B
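Given data pairs (Y, X), the linearized form turns the estimation into a straight-line fit of ln(-ln(1-X)) against ln(Y): the slope is B and the intercept is ln(A). A sketch with synthetic data (the A and B values here are assumed purely for illustration):

```python
import numpy as np

# Model: 1 - X = exp(-A * Y**B)
# =>     ln(-ln(1 - X)) = ln(A) + B * ln(Y)
A_true, B_true = 0.5, 1.8            # assumed values for the demo
Y = np.linspace(0.5, 5.0, 40)
X = 1 - np.exp(-A_true * Y**B_true)  # synthetic "observations"

# Linear least-squares fit in the transformed coordinates.
B_fit, lnA_fit = np.polyfit(np.log(Y), np.log(-np.log(1 - X)), 1)
A_fit = np.exp(lnA_fit)
```

With real data the fit residuals in the transformed plane also give a quick visual check of whether the model form is appropriate at all.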

In terms of computation, what is the difference between a band matrix (tridiagonal, pentadiagonal, ..., with the nonzero terms confined to a few diagonals and all other elements equal to zero) and a general sparse ("hollow") matrix (in which almost all the elements are zero except a few)?

Which is the better kind of matrix to use for computation? Which one performs well numerically?

y= -x/(a^2-x^2)

what is dy/dx?

where a is a constant

If any kind mathematician could help me out, I would be grateful.
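By the quotient rule, dy/dx = -(a^2 + x^2)/(a^2 - x^2)^2: with u = -x and v = a^2 - x^2, one gets (u'v - uv')/v^2 = (-(a^2 - x^2) - 2x^2)/v^2. A symbolic check of this claim:

```python
import sympy as sp

x, a = sp.symbols('x a')
y = -x / (a**2 - x**2)

# Differentiate and compare with the hand-derived quotient-rule result.
dy = sp.diff(y, x)
expected = -(a**2 + x**2) / (a**2 - x**2)**2
difference = sp.simplify(dy - expected)   # should reduce to 0
```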

Can you suggest some (real-life) applications of Computable Analysis?

Why don't we just use classical analysis?

The work studies the distribution of primes via multiplication modulo n, where n is a primorial. Instead of studying the set members, I examine the member spacing and show that it meets the Hardy-Littlewood gap conjectures. It seems straightforward to me, but I would like to know whether it is actually correct and worth further pursuit. Thank you in advance.

Can a non-constant function V exist such that for every N-tuple t there exists a weight w such that V(t, w) is at least as great as V(j, w) for every other N-tuple j?

The idea for an application of such a function is for devising a system in which every "applicant" (represented by an N-tuple) has at least one instance (represented by a weight) where they are a "best choice." In short, it would be useful in a "no losers" system.

y = 1/(x² + c) is a one-parameter family of solutions of the first-order differential equation
y' + 2xy² = 0. Find a solution of the first-order initial value problem (IVP) consisting of
this differential equation and the given initial condition. Give the largest interval I over
which the solution is defined.
y(2) = 1/3
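A sketch of verifying the family and imposing the initial condition symbolically (so each step can be checked): substituting y(2) = 1/3 gives 1/(4 + c) = 1/3, hence c = -1 and y = 1/(x² - 1). Since that solution blows up at x = ±1, the largest interval containing the initial point x = 2 is (1, ∞).

```python
import sympy as sp

x, c = sp.symbols('x c')
y = 1 / (x**2 + c)

# Check the family solves y' + 2*x*y**2 = 0 for every c.
residual = sp.simplify(sp.diff(y, x) + 2 * x * y**2)

# Impose y(2) = 1/3: 1/(4 + c) = 1/3 gives c = -1.
c_val = sp.solve(sp.Eq(y.subs(x, 2), sp.Rational(1, 3)), c)[0]
y_sol = y.subs(c, c_val)   # y = 1/(x**2 - 1)
```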

I am a student of Computer Science and I'm really enjoying the study of linear algebra.

Can anyone recommend good books on this subject?

Thank you in advance!

What really is a fractional derivative? When can we affirm that a given operator is one? Shouldn't we require that it recover the classic derivative when the order becomes an integer?

The word fractional appears in a lot of contexts; it has become something of a fashion. It is clear that the Grunwald-Letnikov, Riemann-Liouville, Caputo, and Riesz derivatives are fractional derivatives, but it is also clear to me that the so-called fractional Fourier transform is not really fractional, and there are others like it. Personally, I believe that the best route into the fractional derivative is the Grunwald-Letnikov definition, because all the others can be deduced from it. Besides, it is the most suitable for numerical computations.
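A minimal numerical sketch of the Grunwald-Letnikov definition (the step size and test functions are my choices). The criterion mentioned above holds here: for integer order the classical derivative is recovered, while a genuinely fractional order gives the known closed form.

```python
import math

def gl_fractional_derivative(f, x, alpha, h=1e-3):
    # Grunwald-Letnikov (lower terminal 0):
    #   D^alpha f(x) ~ h**(-alpha) * sum_k (-1)**k * C(alpha, k) * f(x - k*h)
    n = int(x / h)
    coeff, total = 1.0, 0.0
    for k in range(n + 1):
        total += coeff * f(x - k * h)
        # Recurrence: (-1)^(k+1) C(alpha, k+1) = (-1)^k C(alpha, k) * (k - alpha)/(k + 1)
        coeff *= (k - alpha) / (k + 1)
    return total / h**alpha

# Integer order recovers the classical derivative: d/dt t^2 at t=1 is 2.
d1 = gl_fractional_derivative(lambda t: t**2, 1.0, 1.0)

# Half-derivative of f(t) = t at t = 1 is 2*sqrt(t/pi) = 2/sqrt(pi).
dhalf = gl_fractional_derivative(lambda t: t, 1.0, 0.5)
```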

Does anyone have any recommendations for programs that evaluate the Lyapunov exponents of time-delay dynamical systems?

Does anybody know some concrete applications of matrix factorizations in Computer Science?

Does anyone know how many different versions of non-negative matrix factorization (NMF) exist and what they are? I did a Google search but couldn't find any definitive answer.
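For what it's worth, the classic Frobenius-norm variant with the Lee-Seung multiplicative updates (one of the many versions) fits in a few lines; dimensions, rank and iteration count below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)
V = rng.random((8, 6))          # non-negative data matrix to factor
k = 3                           # chosen inner rank
W = rng.random((8, k)) + 0.1    # positive initialization
H = rng.random((k, 6)) + 0.1

for _ in range(500):
    # Multiplicative updates keep W and H non-negative by construction
    # and monotonically decrease ||V - W H||_F.
    H *= (W.T @ V) / (W.T @ W @ H + 1e-12)
    W *= (V @ H.T) / (W @ H @ H.T + 1e-12)

err = np.linalg.norm(V - W @ H)
```

Other well-known variants swap the cost function (Kullback-Leibler divergence, beta-divergences) or add constraints (sparsity, orthogonality), but the multiplicative-update skeleton looks much the same.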

Suppose {an}, {bn} are convergent sequences of real numbers such that an > 0 and bn > 0 for all n.

Suppose lim an = a and lim bn = b as n -> oo. Let cn = an/bn.

Then which of the following is/are correct?

1. {cn} converges if b > 0

2. {cn} converges only if a = 0

3. {cn} converges only if b > 0

4. lim sup cn = infinity if b = 0

I guess (1) is correct, and (2), (3) are not correct. Am I right? And what about option (4)? Please explain.

How does the number of generators relate to the order of a cyclic group?
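For the finite cyclic group Z_n, the generators are exactly the residues coprime to n, so their count is Euler's totient phi(n). A quick sketch of this standard fact:

```python
from math import gcd

def generators(n):
    # k generates (Z_n, +) exactly when gcd(k, n) == 1,
    # so the number of generators is Euler's totient phi(n).
    return [k for k in range(1, n) if gcd(k, n) == 1]

# Example: Z_12 has phi(12) = 4 generators: 1, 5, 7, 11.
gens_12 = generators(12)
```

In particular, for prime p every non-identity element of Z_p is a generator (p - 1 of them).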

I have a number of Lorentzian curves ("reference curves"), all having the same half width at half maximum, and I know each curve's location parameter. However, the reference curves are multiplied by various different factors, so most heights will differ. I would now like to calculate those factors from the sum over all curves.

I have already written code that performs the calculation; it uses the sum's value at the location of each curve's maximum, and each reference curve's value at this location. The method I'm employing is singular value decomposition, and it seems to work fine. I would, however, like to know whether this approach makes sense, and whether the task can really be solved in this way in all cases, or whether I might just have been looking at a few examples where it happened to work (that it doesn't work when curves have the same location parameter isn't a problem).

I am not a mathematician, so I don't know how to tell whether this seemingly sensible approach is actually misguided. Also, in case you've been wondering, I'm aiming to use it for working with NMR spectra (where I'll also have to deal with random noise, and non-Lorentzian curves), but I would like to check whether the theoretical approach isn't wrong-headed.

Thanks for your time!
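The linear-algebra core of the approach can be sketched as follows: with known centers and a common half-width, the measured sum is a linear combination of reference curves, and the factors solve a linear least-squares problem (SVD-backed, as in the description above). All numbers below are made up for illustration:

```python
import numpy as np

def lorentz(x, x0, hwhm):
    # Lorentzian with unit peak height at x0.
    return hwhm**2 / ((x - x0)**2 + hwhm**2)

x = np.linspace(-10, 10, 400)
centers = [-3.0, 0.5, 4.0]        # known location parameters
hwhm = 0.7                        # common half width at half maximum
basis = np.column_stack([lorentz(x, c, hwhm) for c in centers])

true_factors = np.array([2.0, 0.5, 1.3])
S = basis @ true_factors          # the measured sum curve

# Recover the factors from the sum by linear least squares (uses SVD).
factors, *_ = np.linalg.lstsq(basis, S, rcond=None)
```

Using the full curves rather than only the values at the maxima makes the system overdetermined, which also helps once noise enters; trouble arises only when two reference curves (nearly) coincide, making the basis ill-conditioned.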

Domination in graph theory is one of the most popular topics for research.

I know parallel lines can't touch but what if they share all of the same points? Just wondering if there are any exceptions.

A few examples are the FFT, wavelets, etc.

Can anybody suggest problems where research could be done in the field of linear integral equations?

Once you understand what the P vs. NP problem is actually about, you might as well try to solve it.

In loose terms, the P vs. NP problem actually seeks an answer to this simply stated question:

"Is finding a solution to a math problem equally hard in comparison to verifying that it IS a solution ?"

Math guys usually "search" for a solution to their problem (e.g. solving some equation), but this can apply to "searching" any data set.

Imagine a program that searches for a solution to some equation. That program will most certainly consist of two major parts: a searching part (the solver) and a verifying part (the verifier). The solver tries to construct a solution by some rules and a verifier checks that it actually *is* a solution.

This solution constructing part is like when you do all sorts of manipulations (factoring, cancelling common terms, ...) to solve an equation, and this verifying part is more like when you plug in some values for your solution back to the original equation to check if both sides turn out equal.

The first part will usually take up much time, as finding a solution to some equations is sometimes hard, but once the right solution is constructed, the verifier takes only a fraction of that time to check that it actually IS a solution. P vs. NP asks whether those two tasks are essentially equally hard, because it would of course be nice if solving an equation were as easy as checking the result.

Another way to look at it: it is basically a question about searching through (potentially large) sets of data. In that context, P vs. NP asks this:

"Is there a systematic way of searching through a large data set?"

(A large data set means, for example, a data set not completely searchable in the course of one person's lifetime, for example the whole Internet.)

Of course, people have been trying to answer this for decades ever since the computer era started, but with no luck, in my opinion because of the way the final solution needs to be presented.

It is widely believed that P is not equal to NP, because otherwise there would be baffling implications for, say, cryptography and code breaking. Although there is a huge number of potential passwords that one can make up, P = NP would mean that a brute-force search is not necessary when trying to guess someone's password: there would be a systematic way to obtain it. On the other hand, if P is not equal to NP, then there is no such shortcut.

Also, in this digital age, when almost everything is stored on a computer (music, pictures, texts, ...), if P = NP is true then we could generate any piece of music, any picture, anything, by means of a computer program: we would just "search" for it, provided we have a program that recognizes that something is "a piece of music".

Finally, the PvsNP can be restated in terms of creativity as: "Can creativity be effectively automated ?"

The hardest thing about the problem is actually proving that either case is true. There have of course been many false starts and dead ends so far, and most people still trying today attempt to prove that P does not equal NP. Richard Karp, one of the most renowned computer scientists, once said that this problem will someday be solved (either way) by someone under thirty using a completely new method. So, until then, you might try to solve it yourself.

I have three data sets, A, B and C. A depends on B and C and therefore has (B*C) data points.

I have these equations with parameters. I have to plot graphs of the numerical solutions.

C= inv( Trans(A) * inv(B) * A )

A is a rectangular matrix, and B is a large square matrix.
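One practical note on evaluating this expression: inv(B) need not be formed explicitly; solving B X = A is cheaper and more accurate for a large B. A sketch with randomly generated matrices of assumed sizes:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(6, 3))                  # rectangular A
B = np.eye(6) + 0.1 * rng.normal(size=(6, 6))
B = B @ B.T                                  # make B symmetric positive definite

# C = inv(A^T inv(B) A), computed without forming inv(B):
X = np.linalg.solve(B, A)                    # X = inv(B) @ A via a linear solve
C = np.linalg.inv(A.T @ X)
```

When B is symmetric positive definite (a covariance matrix, say), a Cholesky factorization of B makes the solve step cheaper still.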

Assume the integral of "z*h(z,p,q)" over all values of the scalar z is equal to that of "z*h(z,p)", where both of the scalar-valued h(.) functions respectively integrate to 1 taken over all values of z. So then, is this true iff h(z,p,q) = h(z,p) for all z, p, and q? (Bonus points if you can also let me know the same for discrete z.) Thanks in advance!

Here y can be assumed to be a function of any independent variable. I want to differentiate its present value f(x(n)) with respect to its past value f(x(n-1)).

The following list includes free math software and tools together with the corresponding descriptions and download sites.

Operating systems:

Scientific Linux: A Linux distribution put together by Fermilab and CERN. Freely available from

Ubuntu Linux: A Linux distribution, easy to install and freely available from

Debian: Perhaps the best Linux distribution.

DesktopBSD:

An easy-to-use FreeBSD distribution, which can be tested through a live DVD. Freely available from

BSD: Several Unix distributions.

Applications for symbolic calculus:

wxMaxima:

Calculus with a graphic interface. Freely available from

Axiom: Similar to the preceding one.

Euler: A numerical computing environment.

Scilab: A numerical computation package, similar to MATLAB.

Octave: A free MATLAB-like numerical environment.

GAP: Computational discrete algebra.

R: Statistics

PSPP: Statistics

Haskell: A pure, lazy functional programming language with an interpreter.

Astronomy:

Stellarium: A free planetarium application.

Star charts: Free star charts as PDF files.

Math graphics:

Gnuplot: To build any graphic in 2D or 3D. Freely available from

DISLIN: A graphical library, easy to use.

Word processors:

TexMacs: WYSIWYG editor with a graphical interface, by means of which one can type scientific texts, and export them in PDF, PS, HTML, LaTeX formats. Freely available from

LyX: Similar to the preceding one.

MiKTeX: A complete LaTeX distribution for Windows. Freely available from

Texmaker: A LaTeX editor. Freely available from

TeXnicCenter: Another powerful LaTeX editor for Windows.

Kile: Another LaTeX editor.

TeXShop: A LaTeX distribution and editor for Mac OS X. Freely available from

TeX Live: A LaTeX distribution for Linux and Unix systems.

OpenOffice: A package similar to Microsoft Office.

I have prepared a paper on Graph theory but I don't know how or where to publish it.

I know that here f'(1) = 0. I found in some texts that if f'(x) = 0 for some x, then we cannot apply the N-R method. Is there any other technique to find the first approximation for x^3 - 3x + 1 = 0 taking the initial approximation as 1? Which is correct? a) 1 b) 0.5 c) 1.5 d) 0
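A quick sketch of why x0 = 1 fails and what nearby starting points do (helper names are mine): the Newton step x - f(x)/f'(x) is undefined wherever f'(x) = 0, while slightly shifted starts converge to different roots of x^3 - 3x + 1.

```python
def f(x):
    return x**3 - 3*x + 1

def df(x):
    return 3*x**2 - 3

def newton(x0, steps=30):
    # Plain Newton-Raphson; returns None if the step is ever undefined.
    x = x0
    for _ in range(steps):
        if df(x) == 0:          # f'(x) = 0: the tangent is horizontal
            return None
        x = x - f(x) / df(x)
    return x

stuck = newton(1.0)             # None: f'(1) = 0, the iteration cannot start
root_low = newton(0.5)          # converges to the root near 0.3473
root_mid = newton(1.5)          # converges to the root near 1.5321
```

A common workaround when f'(x0) = 0 is simply to perturb the initial guess, or to take one bisection step first and then switch to Newton.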

Notice that if f: K --> M is an injective map which can be defined by a finite statement, then for every y in img(f) there is a unique x in K satisfying the relation y = f(x), which can be regarded as a definition of y. Thus, either both x and y can be defined by finite statements, or neither of them can.

Moment descriptors are invariant under RST (rotation, scaling, translation) in pattern recognition (PR).

Are moment descriptors almost invariant under optimization?

Are moment descriptors almost invariant under Gaussian noise?

Can reducing the number of pixels used to draw an optimal planar curve shape with a mouse on a computer affect the recognition rate? If yes, to what extent?

May I know a book which gives basic results or information about matrix theory?

For instance, pi = 3.141592... can be defined as the ratio between the circumference and the diameter of a circle. Any integer can be defined by a finite sequence of digits, and so on.

I would like to send my publications to this publisher. I would like to ask if anyone has any experience with this publisher. Thank you!

Homotopy continuation methods provide a useful approach to finding zeros of nonlinear equation systems in a globally convergent way. Homotopy methods transform a hard problem into a simpler one, solve it, and then gradually deform this simpler problem into the original one. I usually solve equations from nonlinear circuits: diode, bipolar, MOS. Now I want to solve other kinds of equations with applications, especially if the equation is multivalued. Would somebody like to collaborate with me?

The symbols we use in mathematics to form equations are just an aid in clearly forming an argument and communicating it to others. We are clearly restricted when we use this formal language. If we could only cast out any mention of this language and symbols when doing mathematics, then we would be on the right track in truly understanding reality's ways.

The notions of quantity, form, change, space, shape, order, etc. are all independent of their symbolic representation. The language can easily change through time, but these notions will not.

Computation as we know it is merely a formal manipulation or transformation of symbols. It can be done by hand or by a computer. Either way, there is always a notion of a conceiver and an executor present when talking about computation. These two are usually one and the same, but I like to think about them as separate entities. The executor follows a fixed set of rules to transform a given string of symbols that the conceiver has conceived with some end goal in mind. The executor blindly follows these rules and eventually (if he is in luck and did not get stuck somewhere blindly following the rules), he will get a transformed string of symbols representing the final result.

And the conceiver is the one that anticipates this result, again as a string of symbols.

So, when doing computation, the main assumption is that, when we manipulate symbols, we manipulate the notions that they represent. Just like in the primitive times, when people practiced magic, they believed that the symbols they use in their spells represent objects from the real world.

They believed that drawing these symbols in some special sequence will result in a spell being cast, and as a result something in the real world will change according to the spell's intention. So, in an amusing way, doing mathematics can be regarded as "doing magic", not in the real world, but in the world of ideas.

Computers process strings of symbols by following a fixed set of rules that we call a program. The conceiver is the programmer, and the executor is of course the computer. The processing by a computer is usually done in a one-by-one fashion, but is much faster than doing it by hand. Computers can be seen as manipulators of symbols, or executors of programs, but the actual thing we are after is the "manipulated" idea after the computer has done millions and millions of manipulations on it (which would be too tedious to do by hand).

So "ideas" are the ones that we are after when doing computation, because we hope that this mechanical grinding away of symbols will tell us something new and interesting about reality and nature, although this point of view was refuted a hundred years ago by Godel's famous incompleteness theorems. These theorems show that there is definately something more to mathematics and computation than just "symbol grinding". Remarkably, Godel showed this using only using some basic facts from NUMBER THEORY, nothing fancy.

And what about nature and reality ?

What are nature's rules, and what "language" is used to set these rules? Nature is the executor, but who is the conceiver? And what is the final result? Is it LIFE, maybe?

The answers to these questions are certainly beyond human comprehension, but there is, as always, a lot of speculation about them! When we finally find this out, only then can we make significant progress in truly understanding this "manipulation of ideas" notion, and "reality's ways" in general, which mathematicians are still desperately and vaguely trying to capture by the notion of "computation".

When I read about the Fourier transform, I find several definitions of it, because there are several conventions. Which definition should I refer to? It is a little bit confusing.

Like the circular and hyperbolic trigonometric functions, can we generalize trigonometric ratios with respect to a general curve?

Can anyone help me find solutions to problems like

P * x + Q * y + ......... >= some constant

Capital letters (P, Q) are constants, while lowercase letters (x, y) are variables.

The problem is to find a solution that satisfies the constraint (being greater than the given constant) while keeping the solution as small as possible.
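Problems of this shape (minimize a linear objective subject to linear inequality constraints) are linear programs. As a sketch, scipy's `linprog` handles a toy instance; the numbers below are made up, and since `linprog` expects "<=" constraints the inequality is multiplied by -1:

```python
from scipy.optimize import linprog

# Toy instance: minimize x + y subject to 2x + 3y >= 12, x >= 0, y >= 0.
# Rewritten for linprog: -2x - 3y <= -12.
res = linprog(c=[1, 1],              # objective coefficients (minimize x + y)
              A_ub=[[-2, -3]],       # inequality, flipped to "<=" form
              b_ub=[-12],
              bounds=[(0, None)] * 2)
# Optimum puts all weight on y (largest constraint coefficient): x=0, y=4.
```

For integer-valued variables the analogous problem is an integer program, which needs a different solver (e.g. branch and bound).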

Given three vectors x, y, z, how do I plot the magnitude [sqrt(x^2+y^2+z^2)] and show it in 3D using MATLAB or Mathematica?

If there is any other math package I could use (and how), that would be great too.
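One free alternative: with Python (numpy + matplotlib) the magnitude over a grid can be rendered as a surface. The sample fields below are placeholders for the actual x, y, z data:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")            # headless backend; drop for interactive use
import matplotlib.pyplot as plt

# Placeholder component fields defined over a 2D grid.
u = np.linspace(-2, 2, 50)
X, Y = np.meshgrid(u, u)
x, y, z = X, Y, X * Y            # substitute the real vector components here

mag = np.sqrt(x**2 + y**2 + z**2)

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.plot_surface(X, Y, mag)
ax.set_zlabel("|v|")
fig.savefig("magnitude.png")
```

If the three vectors are 1D samples rather than fields on a grid, a simple `plt.plot` of the magnitude against the sample index may be all that is needed.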

What are the main differences between Finsler spaces and Riemannian spaces?

There are various implementations and variations of the LLL algorithm, depending on the specific scope. Different "editions" have different input variables, and so on. Does anyone have experience with any of these implementations?

I need help understanding the computer science applications of algebra (rings, fields, groups, etc.).

I am at present in need of help with the bifurcation package XPPAUT.

Actually, I have 3 differential equations, and when I apply XPP I get results, some of which I cannot interpret. If anyone is interested, I can give the equations, etc.

Inverse matrix on PPU and on SPU using SIMD instructions.

This article will talk about how to convert some scalar code to SIMD code for the PPU and SPU using the inverse matrix as an example.

Most of the time in video games, programmers are not doing a standard matrix inverse. It is too expensive. Instead, to invert a matrix, they assume it is orthonormal and just do a 3x3 transpose of the rotation part with a dot product for the translation. Sometimes the full inverse algorithm is necessary.
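In scalar form the shortcut just described is a transpose plus one matrix-vector product. Here is a numpy sketch of it, outside any SIMD concerns (the data layout and names are mine):

```python
import numpy as np

def inverse_orthonormal(m):
    # m: 4x4 transform with rotation R in the top-left 3x3 block and
    # translation t in the last column. Its inverse has rotation R.T and
    # translation -R.T @ t, so no general matrix inversion is needed.
    R, t = m[:3, :3], m[:3, 3]
    inv = np.eye(4)
    inv[:3, :3] = R.T
    inv[:3, 3] = -R.T @ t
    return inv

# Example: rotation about z by 0.6 rad combined with a translation.
c, s = np.cos(0.6), np.sin(0.6)
M = np.array([[c, -s, 0, 1.0],
              [s,  c, 0, 2.0],
              [0,  0, 1, 3.0],
              [0,  0, 0, 1.0]])
M_inv = inverse_orthonormal(M)
```

The three dot products for the translation and the 3x3 transpose are exactly the operations a SIMD version would vectorize.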

The main goal is to be able to do it as fast as possible. This is why the code should use SIMD instructions as much as possible.

A vector is an instruction operand containing a set of data elements packed into a one-dimensional array. The elements can be fixed-point or floating-point values. Most Vector/SIMD Multimedia Extension and SPU instructions operate on vector operands. Vectors are also called Single-Instruction, Multiple-Data (SIMD) operands, or packed operands.

SIMD processing exploits data-level parallelism. Data-level parallelism means that the operations required to transform a set of vector elements can be performed on all elements of the vector at the same time. That is, a single instruction can be applied to multiple data elements in parallel.

I am interested in the solution of nonlinear hyperbolic partial differential equations by various techniques, such as the Lie group-theoretic method, the Van Dyke and Gutmaan techniques, etc.