Questions related to Mathematics
Austrian-born mathematician, logician, and philosopher Kurt Gödel produced one of the most stunning intellectual achievements in history. His shocking incompleteness theorems, published in 1931 when he was just 25, proved that any consistent axiomatic system rich enough to express basic arithmetic contains propositions that can be neither proved nor disproved from the axioms within the system. Such a system cannot be both complete and consistent.
Understanding Gödel's proof requires advanced knowledge of symbolic logic, as well as of Hilbert's formalism and Peano arithmetic. Hilbert's Program was a proposal by German mathematician David Hilbert to ground all existing theories in a finite, complete set of axioms and to provide a proof that these axioms are consistent; ultimately, the consistency of all of mathematics would be reduced to that of basic arithmetic. Gödel's 1931 paper proved that Hilbert's Program is unattainable.
The book Gödel’s Proof by Ernest Nagel and James Newman provides a readable and accessible explanation of the main ideas and broad implications of Gödel's discovery.
Mathematicians, scholars and non-specialist readers are invited to offer their interpretations of Gödel's theory.
When I observed mathematics education among preservice teachers at my university, I found that what students learn in their mathematics classes does not match what is needed in school mathematics classrooms. That led me to initiate design research: a Hypothetical Learning Trajectory containing a sequence of learning processes related to elementary school mathematics learning. I plan to start with observations in elementary mathematics classes to identify misconceptions, then jointly review the issues on campus, conduct literature reviews, design lessons to address the identified problems, implement them in class, and evaluate the results. The final result would be a Local Instructional Theory on learning about teaching mathematics, helping prospective teachers overcome misconceptions that occur in elementary mathematics classes. I would welcome any suggestions for this research.
According to the resources, two algorithms are used in the HOMER software: a grid search algorithm and a derivative-free algorithm.
How can I find the mathematical formulations of these algorithms?
If you know of a method or a source, please help.
Hello! I have an exercise that I have been struggling with for a week. I have a 2D flow with an initial velocity at the inlet of 0.01 m/s, and I want to calculate the numerical values of the velocity as a function of the x and y directions, U(x, y), in the water-flow domain. For example, if I want to calculate the velocity at distances x1 and x2 (as in the figure) as a function of x and y, how can I do it? Note that the flow is incompressible and becomes fully developed after the entrance length. The height of the domain is 0.02 m and the length is 1 m. You can see the attached figure. Thank you in advance; I will appreciate your help!
To fill the magic square you must place the numbers from 1 to 9, one in each box, in such a way that the numbers add up to 15 vertically, horizontally, and diagonally.
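As a quick illustration of the rule above, here is a small sketch that verifies whether a 3x3 arrangement of 1 to 9 is a magic square (all rows, columns, and both diagonals summing to 15), checked against the classic Lo Shu square:

```python
def is_magic(sq):
    """Check a 3x3 grid: uses each of 1..9 once, and every row, column,
    and both diagonals sum to 15."""
    n, target = 3, 15
    rows = [sum(sq[i]) for i in range(n)]
    cols = [sum(sq[i][j] for i in range(n)) for j in range(n)]
    diags = [sum(sq[i][i] for i in range(n)),
             sum(sq[i][n - 1 - i] for i in range(n))]
    uses_1_to_9 = sorted(v for row in sq for v in row) == list(range(1, 10))
    return uses_1_to_9 and all(s == target for s in rows + cols + diags)

# The classic Lo Shu magic square:
lo_shu = [[2, 7, 6],
          [9, 5, 1],
          [4, 3, 8]]
```

All eight solutions of the 3x3 puzzle are rotations and reflections of this one square.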
Please look at the mathematical construction of the theta and zeta functions in the representation of the winding of a sphere. In this paper, based on the representation of the winding of a sphere, we construct a proof of the Riemann hypothesis on the nontrivial zeros. Could you check this proof?
Preprint On the winding of a sphere
Preprint О намотке сферы (On the winding of a sphere, in Russian)
In every remodeling of teacher-training study plans that I have experienced, the temptation, when faced with a strong new point of interest, has been to introduce a subject that caters for it. Thus environmental education, for example, has been part of the curriculum in Spain for more than 25 years.
Now the SDGs are the focus of attention, and the university is committed to developing them. The challenge is to do it from the established curricular subjects.
Mathematics education is developed in three or four subjects of each training curriculum, and it has to embrace developing the teacher's competence to teach school mathematics as well as the purpose of working on the SDGs. How can we do it?
Using the Boltzmann transport equation (BTE), how do we obtain mobilities and conductivity? More calculation-oriented (mathematical) answers would be helpful.
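As a starting point, here are the standard relaxation-time-approximation (RTA) results that drop out of the BTE; this is a sketch assuming a parabolic band with effective mass m* and relaxation time tau, not a full derivation:

```latex
% RTA results for a parabolic band:
\mu = \frac{e\,\tau}{m^{*}}, \qquad
\sigma = n e \mu = \frac{n e^{2} \tau}{m^{*}}
% More generally, the linearized BTE gives the conductivity as an
% energy integral over the band (g = density of states, f_0 = Fermi-Dirac):
\sigma = \frac{e^{2}}{3} \int \tau(\varepsilon)\, v^{2}(\varepsilon)
         \left(-\frac{\partial f_{0}}{\partial \varepsilon}\right)
         g(\varepsilon)\, d\varepsilon
```

The energy dependence of tau (acoustic phonon, ionized impurity, etc.) is what distinguishes the different scattering mechanisms in the integral form.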
In classical times there was a concept of pure science that was produced entirely in the intellect.
More recently sciences were developed by testing the intellectual product with empirical data. These sciences are not regarded as being pure.
Mathematics, often regarded as pure science, has for most of history been based on postulates of geometry that could not be proven. Then came relativity and other geometries. In the past century there was considerable effort to reformulate mathematics on a firmer basis of conditional sets. Math is now regarded as being somewhat more pure than before, while producing two generations of graduating students in some countries who are not able to do simple arithmetic.
Fortunately I had some excellent teachers who explained the two systems and why both were needed. Other teachers presented Gödel's incompleteness theorems.
In academic settings there seems to be a difference of opinions about whether or not math is a science, and whether or not it is pure.
Is Mathematics A Science?
Let P be a polynomial whose coefficients are equal to ±1. Prove that 1/2 < |z0| < 2 for any root z0 of P.
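A sketch of the standard argument, for reference; the upper bound follows from a crude triangle-inequality estimate, and the lower bound from applying it to the reciprocal polynomial:

```latex
% Write P(z) = \sum_{k=0}^{n} a_k z^k with each a_k = \pm 1.
% Upper bound: if |z| \ge 2, then
|P(z)| \ge |z|^{n} - \sum_{k=0}^{n-1} |z|^{k}
        = |z|^{n} - \frac{|z|^{n}-1}{|z|-1}
        \ge |z|^{n} - \left(|z|^{n}-1\right) = 1 > 0,
% so every root satisfies |z_0| < 2 (note P(0) = \pm 1, so z_0 \neq 0).
% Lower bound: Q(z) = z^{n} P(1/z) also has coefficients \pm 1, and its
% roots are the reciprocals 1/z_0. The bound above gives |1/z_0| < 2,
% hence |z_0| > 1/2.
```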
Although Bhāskara II, in his Siddhānta Shiromani, criticized a Buddhist school of astronomy that held that the Earth moves, I have not seen any mention of texts in a Buddhist astrological or mathematical tradition in recent studies. Vedic and Jain sources are well known.
Are there any primary textual resources on Buddhist astrology/mathematics?
Physical reality can be observed. At least part of its structure and behavior is perceivable, and humans can communicate about these experiences. Curious humans want to comprehend these perceptions. They have designed linguistic tools to communicate about the perceived structure and behavior of physical reality, and with these tools they have constructed structures and models of mechanisms that might explain the structures and mechanisms that physical reality exposes. Some of these structures and models seem to be successful.

People discuss the success of these approaches and call this activity exact science. Other humans discuss this activity and call themselves philosophers. Humans are interested in the structure and mechanisms of physical reality because this knowledge helps them survive as individuals and as communities.

Part of the exact sciences is formed by mathematics. Mathematics contains structures and models of mechanisms that are not directly derived from perceptions of physical reality; these concepts are derived from abstract foundations. Examples are empty space and point-like objects. Scientists use these concepts to construct vector spaces, number systems, and coordinate systems, and apply these higher-level concepts to construct a model of their living space. Philosophers will immediately point out that it is impossible to prove that these models are correct. However, these models feature structure and behavior, and if the structure and behavior of a model agree more closely with the perceived structures and behavior, then there is a larger chance that the model fits reality. Since reality appears to be very complicated, there is little chance that good correspondence will ever end the dispute.
One of the aspects of the dispute concerns what the best inroad will be for comprehending most of the structure and the mechanisms of physical reality. That is the background of the posed question.
The US Fed's balance sheet has recently doubled from $4T to $8T (see the monthly chart); not to mention it used to be around $800B in 2008. It seems to me that this is highly inflating the housing market, pushing housing prices higher as well as inflation rates, without taking economic production much into consideration. The Fed refers to the situation as "transitory".
I'm sure there should be a solid mathematical rationale behind it. Is this parabolic quantitative-easing approach a mathematically and/or economically sound strategy, considering the current economic situation? Are there alternative strategies? What are the upsides and downsides?
A somewhat outdated visualization of the components of the Fed's balance sheet:
Dear colleagues, I would like to know if there is a mathematical equation or model to calculate the Soxhlet number of cycles by varying the following parameters:
- the type of solvent
- the volume of the extractor
- the temperature of the solvent heating
If yes, how can we apply it, and how much uncertainty does it have?
thank you so much
such as Fourier, Hilbert, or Laplace transforms, for analysis and study in the field of musical acoustics.
I need to find a method of proving a mathematical equation that exists in the attached file.
This mathematical equation concerns Bessel functions.
I would be thankful for your help and advice.
Dimensional rules for Homogeneous Diophantine equations
A Diophantine equation is a polynomial equation in several unknowns with integer coefficients, whose solutions are required to be whole numbers. The only solutions of interest are integers.
In mathematics and physics, the dimension of a mathematical space (or object) is informally defined as the minimum number of coordinates needed to specify any point within it or the minimum number of sides to build a physical area or volume.
To build a geometrical rectangle (square) we require 2 orthogonal sides (x, y are independent indeterminates) area = x*y (or area = x*x)
To compute a numerical square (a square number) we require two numbers p and q that are coprime, with gcd(p, q) = 1, i.e., independent numbers.
Let us introduce a dimension for the nature of numbers, such as I (for integer), R (for real), C (for complex), and so on.
n^2 = p^2 + q^2; 5^2 = 3^2 + 4^2
Dimensional analysis gives, for n^2 = p^2 + q^2:
I*I = I*I + I*I; the sum of two squares may equal a square under some conditions.
Then the equation n^2 = p^2 + q^2 may have integer solutions.
A dimension is anything we can independently adjust, it is based on some geometric or algebraic definition.
In any correct mathematical equation representing the relation between physical quantities or mathematical objects (length, area, volume, ..., simplex), the dimensions of all terms must be the same on both sides. Terms separated by '+' or '-' must have the same dimensions.
The degree of freedom is the number of free components: how many components need to be known before the "object" is fully determined?
- a point in the plane has two degrees of freedom : its two coordinates;
- a square has two degrees of freedom : its two sides ( length and breadth).
- A point in 3-dimensional space has three degrees of freedom because three coordinates are needed to determine the position of the point.
Law of Degree of Freedom
The degree of freedom of a Diophantine equation is the number of parameters of each side that may vary independently. The degree of freedom must be the same on both sides.
- The equation n^2= p^2 +q^2
LHS is a square number with two degrees of freedom (its two multipliers or components) .
RHS is a sum of two squares with two degrees of freedom p^2 and q^2 which vary independently.
The degree of freedom is defined as the dimension of a mathematical object. When the degree of freedom is used instead of dimension, this usually means that the object is only implicitly defined.
An attempt to answer Hilbert's tenth problem
Hilbert's Tenth Problem: Given a Diophantine equation with any number of unknown quantities and with rational integral numerical coefficients, to devise a process according to which it can be determined in a finite number of operations whether the equation is solvable in rational integers.
The problem was solved in 1970 by Yuri Matiyasevich, who in his doctoral dissertation proved that a general algorithm for solving all Diophantine equations cannot exist.
I think this concept of dimensional analysis is an attempt to answer Hilbert's tenth problem for homogeneous Diophantine equations. It works very well and allows one to determine whether a homogeneous Diophantine equation has integer solutions, though the solutions themselves must still be calculated using modular arithmetic or algebraic methods. The most important thing is to answer yes/no: to affirm whether integer solutions exist or not.
I want to center my data (subtract the mean) in order to bring my intercept as close to zero as possible. However, since I also have categorical data, which I do not know how to center, I cannot seem to lower my intercept.
The thing is that my dummy variables drive up the intercept, so I do not know how to fix the issue.
When I enter only continuous variables that are centered, the intercept is close to zero.
Thank you in advance!
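A sketch of the usual practice, with simulated placeholder data: center only the continuous predictors and leave the dummies at 0/1. The intercept then equals the expected outcome for the reference category at the mean of the covariates, which need not be near zero, and that is generally fine:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
# Hypothetical data: one continuous predictor and one 0/1 dummy.
df = pd.DataFrame({
    "x": rng.normal(10, 2, 200),
    "group": rng.integers(0, 2, 200),
})
df["y"] = 3.0 + 0.5 * df["x"] + 2.0 * df["group"] + rng.normal(0, 1, 200)

# Center continuous predictors only; dummies stay 0/1.
df["x_c"] = df["x"] - df["x"].mean()

# OLS via numpy least squares on the centered design.
X = np.column_stack([np.ones(len(df)), df["x_c"], df["group"]])
beta, *_ = np.linalg.lstsq(X, df["y"].to_numpy(), rcond=None)
intercept = beta[0]
# The intercept is the expected y for the reference group (dummy = 0)
# at the mean of x; it is only near zero if that quantity happens to be.
```

If you truly need a near-zero intercept, you would have to center y itself or use effect (-1/+1) coding for the categories, but the slope estimates are unaffected either way.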
I am combining fish abundance and biomass data collected from five locations which have utilised different transect dimensions:
15 x 4 m (60 m²; from one location)
30 x 5 m (150 m²; from three locations)
50 x 4 m (200 m²; from one location)
When pooling data collected from transects of differing sizes, should you standardise your data down (to the smallest dimensions) or up (to the largest)? Or in this case, perhaps standardise according to the most commonly used dimensions (30 x 5 m/150 m2)?
To me it seems as though there would be no difference, but I was wondering if there was some sort of mathematical explanation for why one way might be preferable over the others?
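For what it's worth, a common convention is to sidestep the up/down choice entirely by converting everything to a density per unit area; any common reference area is then just a rescaling. A minimal sketch with hypothetical counts:

```python
# Convert raw counts per transect to densities (per m^2), which makes
# transects of different sizes directly comparable. Counts are
# hypothetical illustrative values.
transects = [
    {"site": "A", "area_m2": 60.0,  "count": 18},   # 15 x 4 m
    {"site": "B", "area_m2": 150.0, "count": 45},   # 30 x 5 m
    {"site": "C", "area_m2": 200.0, "count": 50},   # 50 x 4 m
]
for t in transects:
    t["density_per_m2"] = t["count"] / t["area_m2"]

# Densities can then be rescaled to any common reference area,
# e.g. the most frequently used 150 m^2:
for t in transects:
    t["count_per_150m2"] = t["density_per_m2"] * 150.0
```

The one caveat is statistical rather than arithmetic: smaller transects give noisier density estimates, so if you model the pooled data it can be worth weighting by (or offsetting for) the surveyed area.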
I hope you are all doing well. I have a question for all of you. I love to teach, and so far I have taught everything from international business, marketing, and management to organisational psychology. I feel ambivalent about this, however. I love to teach, I am curious by nature, and what better way to learn new things than to teach? However, I also feel that if I continue being a generalist it might be bad, since I risk losing touch with my main specialisation.
What are your thoughts? Your experiences?
My speciality is this :
Best wishes, Henrik (and Professor Hugo, 3 months old, who keeps me company when I'm trying to work :-))
PS: stay safe.
I have seen that some equation sets have an explicit way of defining the substrate consumption whereas others have an implicit way. I would like to know how to convert the former into the latter.
For instance, in the attached file, it can be seen that there are two ways to describe the growth of bacteria.
In the first (explicit), the presence of a half-saturation constant suggests a Michaelis-Menten-type model. The second is straightforwardly a logistic model.
I was wondering if I can convert the first into the second, that is explicit to implicit.
Considering that the Monod term says that
μ = μᵢ[S/(S + K)]
then I can write
Ṅ = μN * logistic term
My question is: Would this modification be mathematically and biologically sound?
If yes, how can I estimate the carrying capacity? For instance, if I know the amount of limiting substance, can I estimate the number of cells that a system can sustain?
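One hedged way to connect the two, assuming growth is limited only by the substrate and using an assumed yield coefficient Y (cells produced per unit substrate consumed): mass conservation then gives a carrying capacity K_cap = N0 + Y*S0, and the logistic model with that capacity approximates the explicit Monod model. A numerical sketch with hypothetical parameter values:

```python
# Compare the explicit Monod model with a logistic approximation whose
# carrying capacity comes from the yield coefficient. All parameter
# values below are assumed, for illustration only.
import numpy as np
from scipy.integrate import solve_ivp

mu_max, K_s, Y = 0.8, 0.5, 2.0e8   # 1/h, g/L, cells per g substrate
N0, S0 = 1.0e6, 2.0                # initial cells, initial substrate (g/L)

def monod(t, y):
    N, S = y
    mu = mu_max * S / (S + K_s)
    return [mu * N, -mu * N / Y]   # growth and substrate consumption

# If all limiting substrate is converted to biomass:
K_cap = N0 + Y * S0

def logistic(t, y):
    N = y[0]
    return [mu_max * N * (1.0 - N / K_cap)]

t_span = (0.0, 40.0)
sol_m = solve_ivp(monod, t_span, [N0, S0], rtol=1e-8, atol=1e-3)
sol_l = solve_ivp(logistic, t_span, [N0], rtol=1e-8)
# Both saturate near K_cap, though their transient shapes differ
# (the Monod model stays exponential until S approaches K_s).
```

So biologically the substitution is defensible as an approximation when substrate is the sole limitation, and K_cap is estimated exactly as you suggest: from the amount of limiting substance times the yield.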
I'm working on constructing a field based on finite undirected, unlabeled graphs. I've selected the graph join and the graph Cartesian product as the two operations, but I'm trying to figure out whether the latter distributes over the former (or over simple graph union).
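A quick computational check (using networkx, with the join built by hand) suggests distributivity over the join fails already on tiny graphs: an edge count shows K2 x (K1 + K1) is C4, while (K2 x K1) + (K2 x K1) is K4.

```python
# Counterexample check: does the Cartesian product distribute over the join?
import networkx as nx

def join(G, H):
    """Graph join: disjoint union plus every edge between the two parts."""
    n = G.number_of_nodes()
    J = nx.disjoint_union(G, H)   # relabels nodes 0..n+m-1
    J.add_edges_from((u, v) for u in range(n)
                     for v in range(n, n + H.number_of_nodes()))
    return J

G = nx.complete_graph(2)   # K2
H = nx.complete_graph(1)   # K1
K = nx.complete_graph(1)   # K1

lhs = nx.cartesian_product(G, join(H, K))            # K2 x K2 = C4: 4 edges
rhs = join(nx.cartesian_product(G, H),
           nx.cartesian_product(G, K))               # K2 + K2 = K4: 6 edges
# Same vertex count, different edge counts, hence not isomorphic.
```

Intuitively, the right-hand side joins *every* vertex of G x H to *every* vertex of G x K, while the left-hand side only adds cross edges within each copy of G, so the edge counts diverge as soon as |G| > 1.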
I want to know how we can relate magnetostriction with Applied magnetic field mathematically? Or if there is any way to derive it using mechanical equations?
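As a hedged starting point, the standard phenomenological relation for an isotropic polycrystalline material (with saturation magnetostriction lambda_s) is often written as follows; the field dependence enters indirectly through the magnetization M(H):

```latex
% Strain measured at angle \theta to the magnetization direction:
\lambda(\theta) = \frac{3}{2}\,\lambda_{s}\left(\cos^{2}\theta - \frac{1}{3}\right)
% Below saturation, \lambda is commonly modeled as quadratic in the
% magnetization, \lambda \propto M^{2}(H), which is where a mechanical
% (stress-strain) description couples to the applied field H via M(H).
```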
The field equations, originally starting as Maxwell distribution graphs, configure a velocity variable. Understanding this in connection with wave theory requires a mathematical transformation of the velocity variable into a wave-function parameter. This may help to define particle-wave quantum relativity quantitatively. The Legendre transform may, in theory, provide a way to link the functionality of such parameter-variable systems.
There is a contradiction between the natural width of the energy transition, which is determined by the lifetime of the corresponding energy level, and the spectral width of the radiation line, which is determined by the duration of the wave train.
For example, for the Mössbauer transition the lifetime is of the order of 2 years, while the interaction time at the receiving end is about 10^(-10) s.
Mathematically, this interaction is expressed by the Feynman diagram of the electron - electron interaction, which integrates over the internal photon line, which, together with the delta functions of the vertex parts, limits the photon spectrum.
By the way, the same paradox applies to any other type of collision.
So, does the electromagnetic field really exist?
When we derive the formula for length contraction, we use the direct Lorentz transformation, but for deriving the formula for time dilation we use the inverse transformation. Why is that so?
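A sketch of why the two derivations pick different transformations: in each case we choose the transformation whose right-hand side contains the measurement constraint, so that the constrained term drops out.

```latex
% Length contraction: a rod at rest in S' has proper length L_0 = x'_2 - x'_1.
% In S both ends must be measured at the same time, \Delta t = 0.
% The direct transformation x' = \gamma\,(x - v t) then gives
\Delta x' = \gamma\,\Delta x \quad\Rightarrow\quad L = \frac{L_0}{\gamma}.
% Time dilation: a clock at rest in S' sits at fixed x', so \Delta x' = 0.
% The inverse transformation t = \gamma\left(t' + \frac{v x'}{c^2}\right) gives
\Delta t = \gamma\,\Delta t'.
% Using the other transformation in either case would leave two unknowns
% (e.g. \Delta t' \neq 0 for the rod) and require an extra elimination step.
```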
A question from far-away worlds... Life sciences (a cricket) and a hyperbolic function from the history of mathematics are juxtaposed here to invite any reply.
[The antennae of the cricket roughly complete the less frequently drawn branches of the hyperbola in the photographed virtual third and fourth quadrants of the Cartesian coordinate system.]
Here we discuss one of the famous unsolved problems in mathematics, the Riemann hypothesis. We present a student's view of this hypothesis and pose questions that may be of help to researchers and scientists.
They say the best educators are able to make a complex subject easily understood by a child. Perhaps this is a challenge to us all.
Quaternion mathematics can be quite complex to wrap your head around. For the educators out there: how are you explaining it to your high-school students?
Is a pictorial method the most appropriate under these circumstances?
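One concrete classroom device, alongside pictures, is to let students rotate an actual vector: a minimal sketch of quaternion rotation v' = q v q^(-1) with q = (cos(theta/2), sin(theta/2)*axis), checked on "rotate x-hat by 90 degrees about z":

```python
import math

def qmul(a, b):
    """Hamilton product of two quaternions (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def rotate(v, axis, theta):
    """Rotate 3D vector v by angle theta about a unit axis."""
    half = theta / 2.0
    q = (math.cos(half), *(math.sin(half) * a for a in axis))
    q_conj = (q[0], -q[1], -q[2], -q[3])   # inverse of a unit quaternion
    w = qmul(qmul(q, (0.0, *v)), q_conj)
    return w[1:]

# Rotating x-hat by 90 degrees about z should give y-hat.
v = rotate((1.0, 0.0, 0.0), (0.0, 0.0, 1.0), math.pi / 2)
```

Students can then compare this with the familiar 2D rotation matrix and see why the half-angle appears.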
I have read that time is defined in terms of entropy (entropy of the universe maybe? I'm not sure). But entropy is a macroscopic quantity, which makes time a macroscopic quantity. Yet time is a parameter in Schrödinger's equation. So we have a macroscopic quantity appearing in a microscopic equation. That makes me think that this microscopic equation is really a statistics equation. It produces deterministic solutions, but maybe these deterministic solutions are just mathematical procedures for statistical calculations. That would be consistent with the fact that these solutions actually are used for statistical calculations. Does it make sense for a macroscopic quantity (time) to appear in a microscopic equation, unless it is a statistics equation? Or am I mistaken in calling time macroscopic? My primary question is whether Schrödinger's equation can be derived from some set of statistical postulates. The question is not whether the equation was historically obtained from statistical arguments (we already know it wasn't); the question is whether the same equation can be derived from statistical arguments. What Schrödinger was thinking at the time of producing his equation is not relevant to this question.
Mathematics is the basis of the exact sciences. Its development consists, among other things, in the fact that new phenomena of the surrounding world, until recently described only from a humanistic perspective, are also being interpreted in mathematical terms.
However, is it possible to write down the essence of artistic creativity in mathematical models and create a pattern model for creating works of art, creative solutions and innovative inventions? If that was possible, then artificial intelligence could be programmed to create works of art, creative solutions and innovative inventions. Will it be possible in the future?
Do you agree with my opinion on this matter?
In view of the above, I am asking you the following question:
Will mathematics help to improve artificial intelligence so that it will achieve human qualities of artistic creativity and innovation?
I invite you to the discussion
What are other possible ways to convert a CW laser into a pulsed laser? Please give all the theoretical and mathematical details.
I want to know that how to find theta maximum when i am using GPS and sphere and surface type distribution. and using cosine distribution.
then i want to know that is there any mathematical calculation to find theta maximum for biasing.
like in the Geant4/examples/advanced/radio-protection theta maximum is 0.003 deg. how they got this using the surface area of source or object. like how can i get for my object ?
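One common geometric choice, offered here as an assumption rather than what that example necessarily did: pick the half-angle that just covers the target, theta_max = arctan(R/d), for a target of radius R at distance d from the source. A tiny sketch:

```python
import math

def theta_max_deg(target_radius, distance):
    """Half-angle (degrees) subtended by a disc of radius R at distance d."""
    return math.degrees(math.atan(target_radius / distance))

# Hypothetical numbers: a 5 cm radius target 1 m from the source.
theta = theta_max_deg(0.05, 1.0)   # roughly 2.86 degrees
```

A value as small as 0.003 deg would correspond to a very small target and/or a very large source-to-target distance under this formula; checking the example's geometry against arctan(R/d) should tell you whether that is how it was chosen.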
any type of help will be appreciated.
Thanks in advance !
STEM was the main theme of the ASTE 2019 international conference, with at least 8 posters, 27 oral presentations, and 3 workshops promoting STEM classrooms, STEM instruction/teaching, STEM lessons, STEM summer camps, STEM clubs, and STEM schools, without providing a conceptualization or operational definition of what STEM is. Some presentations advocated the integration of the disciplines, but the examples provided were mainly "inquiry" and "engineering design" practices that did not in fact differ from the kind of poorly conceptualized, epistemologically incongruent hands-on/minds-off classroom activities.
It is therefore worth considering:
(1) Why do we call it STEM if it does not differ from practices applied for decades (e.g., inquiry, hands-on activities)?
(2) What benefits (if any) can this STEMification mindset/trend bring to science education and its research?
- An understanding of theories about how people learn, and the ability to apply these theories in teaching mathematics, is one of the primary requirements for effective mathematics teaching. A large number of scientists have studied mental development and the nature of learning in different ways, and this has resulted in various theories of learning.
A question about the loss of hyperbolicity in nonlinear PDEs: when complex eigenvalues appear, what is the effect on the flow? I understand that we have no general existence results in this case, but is it only the mathematical tools that are lacking, or can we exhibit physical phenomena of instability?
Different longitudinal studies, such as the Perry Preschool Project or the Abecedarian Project, have shown that early and effective exposure of ECE students to content such as science or mathematics has positive effects on children's later development, not only in the academic sphere but also in the personal, social, and economic domains.
I attached an image file of Excel data in which the problem is to predict the value of u using any mathematical method. Please share the procedure or method to be followed to solve the problem. Thank you in advance.
Although there is a reasonable dose of physical and geometrical interpretation of mathematical formulas and equations, more work is needed to bridge the gap between the theory of mathematics and its applications. For example, consider the coplanarity of three vectors, which is typically expressed as a dot and a cross product between the three vectors. Based on practical feedback, when the explanation of coplanarity is broken down into the cross product first, to generate a perpendicular vector, and then the dot product with the third vector, which is now expressed as an orthogonality relationship between two vectors, a deeper understanding can be achieved by the students. Indeed, more class work is required to take the students through this journey using a stepwise approach. I believe that the teaching of mathematics to engineering students should go into deeper physical interpretation to facilitate its comprehension and understanding. Your comments are highly appreciated.
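The stepwise view described above can even be demonstrated live in class with a few lines of code: first the cross product n = b x c (the normal to the plane of b and c), then the dot product a . n, which vanishes exactly when a lies in that plane.

```python
import numpy as np

def scalar_triple(a, b, c):
    """a . (b x c): zero iff the three vectors are coplanar."""
    return float(np.dot(a, np.cross(b, c)))

b = np.array([1.0, 0.0, 0.0])
c = np.array([0.0, 1.0, 0.0])
a_coplanar = np.array([2.0, -3.0, 0.0])    # lies in the b-c plane
a_off_plane = np.array([2.0, -3.0, 1.0])   # sticks out of the plane

t1 = scalar_triple(a_coplanar, b, c)   # 0.0: coplanar
t2 = scalar_triple(a_off_plane, b, c)  # 1.0: not coplanar
```

Seeing the intermediate normal vector explicitly, rather than the compact determinant formula, is exactly the step-by-step decomposition argued for above.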
Hi all, I am looking for a patterning assessment to use as a pre- and post-test alongside a patterning intervention in mathematics for 7 to 9 year old children.
I know of PASA by JT Mulligan, are there any others?
Thanks in advance for your help!
The gyroscope is taken here as a mathematical gyroscope, that is, the intersecting lines of the equator and one meridian. The permissible movements of our mathematical gyroscope are the proper rotations of the equator and meridian circles and the rotation of the entire structure around the axis passing through the intersection points of the two circles. Since the proper rotations of the circles are specified by the group of diagonal matrices with complex units, and the rotation of the entire structure is specified by the group of special orthogonal matrices, it is expected that the group of motions of the mathematical gyroscope generated by these groups is equal to the unitary group U(2).
It is clear that this construction has a generalization in the form of a mathematical gyroscope of the n-dimensional sphere, which generates the group U(n). Does this construction find application in phenomenological theories of gauge symmetries?
I am looking at glacier change in the Andes and using SPSS for my statistics. I have also been provided with discharge data, for which I need to look for a relationship between glacier area, temperature, and river flow over a specific 7-year period. Temperature is given as daily minima/maxima, river flow is monthly, and area change is yearly. Thanks :)
More specifically, I am looking to analyze and plot non-Michaelian kinetics based on functional parameters from BRENDA or similar sites. I do not know how to find the mathematical function describing such kinetics and have not yet been able to find it in the literature.
I would greatly appreciate any help from the community!
Dear colleagues, what do you think of a possible mathematical analogy between a one-sided surface and a Bose-Einstein condensate?
Hi everyone! Greetings from Munich!
It appears in my mediation analysis that X is negatively related to M, and M is positively related to Y. I also find a significant negative effect of X on Y through M. But since M is defined as a perceived benefit, I am currently struggling with the interpretation of this indirect effect.
Mathematically, of course, this indirect effect makes sense, since a negative times a positive is negative. But can I interpret this by saying the benefit is overridden, or is it rather that the benefit "backfires" on Y, and thus a negative indirect effect is found?
Many thanks in advance!
Pursuing research in mathematical science through physical problem-solving is an art. More students of science and engineering can be attracted to learn mathematics in the process and apply the mathematical tools in solving real-world problems.
Nonlinear mixed-effects models (may) consider data below the limit of quantification (BLQ) in parameter estimation. However, an evaluation of the goodness-of-fit plots (observations vs predictions in particular, using spline interpolation), displays a strong trend (of spline interpolation, but not of the data) in the region of censored data, as if the model disregarded BLQ data and the data were the lower limit of quantification itself, as structured in the database. I believe that the database is structured correctly and that the model considered the censored interval. Apparently this plot is the only one to exhibit this behavior.
Is spline interpolation adequately representing the competence/capability of the final model in this case? How to handle this situation?
I'm looking for references about how (or whether) learning mathematical language prior to grammar impacts comprehension and interpretation abilities.
A different route with a similar purpose: to understand whether enhancing adults' skills in understanding mathematical language can have any impact on the development of their speaking and writing skills.
Does anyone know of any studies, papers around this?
Let T denote the circle group, that is, the multiplicative group of all complex numbers with absolute value 1. Let f : T → T be a (sequentially) continuous map such that f(z²) = f(z)² for all z ∈ T. Then there is an integer k such that f(z) = z^k for all z ∈ T.
Is there any mathematical equation to determine the appropriate number of hidden layers for a sequential model?
I shared a picture of three parameters, 1. change in temperature, 2. change in relative humidity, 3. change in pressure, together with the respective error value for each.
From the attached data (picture and Excel file attached), I need to find the error value for different input parameters.
1.Change in Temperature = 1°C
2. Change in Relative Humidity = 1%
3. Change in Pressure = 1mbar
What is the error value?
1.Change in Temperature = 2°C
2. Change in Relative Humidity = 2%
3. Change in Pressure = 2mbar
What is the error value?
1.Change in Temperature = 4°C
2. Change in Relative Humidity = 3%
3. Change in Pressure = 2mbar
What is the error value?
Is it possible to find the error value mathematically? Please tell me the way to calculate it using a calculator or Python programming.
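One simple approach, assuming the error varies roughly linearly over this small range: fit error = b0 + b1*dT + b2*dRH + b3*dP to the tabulated points by least squares, then predict new combinations. The rows below are hypothetical placeholders; substitute the values from the attached file.

```python
import numpy as np

# columns: dT (degC), dRH (%), dP (mbar), observed error  (placeholder rows)
data = np.array([
    [1.0, 1.0, 1.0, 0.30],
    [2.0, 1.0, 1.0, 0.45],
    [1.0, 2.0, 1.0, 0.40],
    [1.0, 1.0, 2.0, 0.35],
    [2.0, 2.0, 2.0, 0.60],
])
X = np.column_stack([np.ones(len(data)), data[:, :3]])
beta, *_ = np.linalg.lstsq(X, data[:, 3], rcond=None)

def predict_error(dT, dRH, dP):
    """Linear-model prediction of the error for a new parameter combination."""
    return float(beta @ np.array([1.0, dT, dRH, dP]))

e = predict_error(2.0, 2.0, 2.0)
```

If the residuals of this fit are large, the relation is not linear in this range and a multilinear interpolation (e.g. `scipy.interpolate.RegularGridInterpolator` on a full grid of measurements) would be the next thing to try.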
In the literature, two formulae are generally used for evaluating gravimetric power density (Pd):
1. Pd (W/kg) = energy density (Wh/kg) x 3600 / discharge time (s), and
2. Pd (W/kg) = [10^6 x V^2] / [4 x ESR (Ohm) x mass (mg)]
But each equation gives a different result for any known value of current density. What may be the reason for this, and which equation should be used for determining the power density of a two-electrode supercapacitor?
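Part of the answer may simply be that the two formulas measure different things: formula 1 is the average power actually delivered over the discharge, while formula 2 is the matched-load maximum power V^2/(4*ESR), so they need not agree. A small numeric comparison with hypothetical cell values:

```python
# Hypothetical cell parameters, for illustration only.
E_wh_per_kg = 10.0      # energy density, Wh/kg
t_discharge = 60.0      # discharge time, s
V = 2.7                 # rated voltage, V
ESR = 0.5               # equivalent series resistance, ohm
mass_mg = 10.0          # electrode mass, mg (10^6 converts mg -> kg)

pd1 = E_wh_per_kg * 3600.0 / t_discharge       # W/kg, average over discharge
pd2 = 1e6 * V**2 / (4.0 * ESR * mass_mg)       # W/kg, matched-load maximum
# pd2 >> pd1 here: the maximum deliverable power is usually far above the
# average power at a given (moderate) current density.
```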
Is there a deeper fundamental property behind neutrino oscillations? How does it work at the field level? Is there an advanced mathematical connection, or is it only a physical fact?
I designed and made a rotary atomizer with the help of 3D printing. Now, in order to evaluate its performance, I want to measure the diameter of the produced droplets. According to mathematical calculations and theory, the diameter of the droplets is about 100 microns and I want to measure the diameter experimentally.
Please advise how to experimentally measure the diameter of a droplet that is produced continuously.
Maybe a stupid question, but I have forgotten how to calculate such things. Let's say I have a group of 200 elements (genes). I want to divide this group into n subgroups, each containing 10 elements; however, each element should be present in more than one subgroup. Is there a formula to calculate n depending on the number of times each element should be present in independent subgroups?
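If each of the N elements must appear in exactly k subgroups, a simple counting argument gives the answer: the N*k placements must fill subgroups of size s, so n = N*k / s (which must come out as a whole number). A tiny sketch:

```python
def num_subgroups(N, k, s):
    """Number of subgroups of size s so that each of N elements
    appears in exactly k of them: n = N*k / s."""
    total_slots = N * k
    if total_slots % s != 0:
        raise ValueError("N*k must be divisible by the subgroup size s")
    return total_slots // s

# 200 genes, each in 3 subgroups of 10 elements:
n = num_subgroups(200, 3, 10)
```

Note this only fixes the *count* of subgroups; actually assigning the elements so that the overlaps are balanced is a separate (block design) problem.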
I'm solving a system of differential equations:
u'(x) = a11(x)*u(x) + a12(x)*v(x)
v'(x) = a21(x)*u(x) + a22(x)*v(x)
or simply, U'=AU.
Given A (i.e., a11/a12/a21/a22) as a function of x, I want the general solution for u and v.
The solution for constant A is already known, using the matrix exponential.
But in this case the matrix A is a function of x, and I found by hand that the same approach is not valid.
Any keyword or references to go further would be appreciated.
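As keywords: exp(integral of A) is valid only when A(x1) and A(x2) commute for all x1, x2; in the general non-commuting case the relevant tools are the "Magnus expansion" and the "Peano-Baker series", and in practice one often just integrates numerically. A sketch with scipy, using a (deliberately commuting) example so the answer can be checked against the matrix exponential:

```python
import numpy as np
from scipy.integrate import solve_ivp

def A(x):
    # Example matrix: x*I + N with constant nilpotent N, so A(x1) and A(x2)
    # commute and exp(int A dx) happens to be exact here for checking.
    return np.array([[x, 1.0],
                     [0.0, x]])

def rhs(x, U):
    return A(x) @ U

# Solve U' = A(x) U on [0, 1] with U(0) = (0, 1).
sol = solve_ivp(rhs, (0.0, 1.0), [0.0, 1.0], rtol=1e-10, atol=1e-12)
u1, v1 = sol.y[:, -1]
# For this commuting example, exp(int_0^1 A dx) = e^{1/2} [[1,1],[0,1]],
# so U(1) = e^{1/2} (1, 1).
```

For a genuinely non-commuting A(x) the numerical route still works unchanged; only the closed-form check is lost.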
I am conducting research on students' perceptions of face-to-face and online mathematics learning. One of the problems in this study is to determine whether there is a significant difference in students' perceptions between face-to-face and online mathematics learning. I will be using an independent t-test for hypothesis testing. Now I am struggling with which data I should use, especially for the mean and n. My survey questionnaire consists of 20 items (10 on perceptions of face-to-face learning and 10 on online learning). I have already gathered the data and calculated the weighted means as well as the grand mean for both learning modes. Can I use the grand weighted mean as the mean for hypothesis testing with an independent t-test? What value should I use for n? I have 84 respondents. If I use the per-item weighted means, n would be 10, because it would be based on the items for each learning mode in the questionnaire. Please help me, and thank you for responding!
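A hedged suggestion: since the same 84 respondents rated both modes, the usual unit of analysis is the per-respondent mean score for each mode (n = 84 per mode), and a *paired* t-test is arguably more appropriate than an independent one; using the 10 item means (n = 10) discards the respondent-level variability. A sketch with simulated placeholder scores:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_resp = 84
# Per-respondent mean of the 10 Likert items for each mode
# (simulated placeholder values on a 1-5 scale).
f2f = rng.normal(4.0, 0.5, n_resp).clip(1, 5)
online = rng.normal(3.6, 0.6, n_resp).clip(1, 5)

# Same respondents rated both modes -> paired t-test is the natural choice.
t_paired, p_paired = stats.ttest_rel(f2f, online)

# An independent t-test treats the two score sets as unrelated samples;
# shown for comparison with the originally planned analysis.
t_indep, p_indep = stats.ttest_ind(f2f, online)
```

Either way, the inputs are the 84 individual scores per mode, not the two grand means; a t-test needs the spread of the data, which a single grand mean cannot provide.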
This is my very first encounter with a functional equation of this kind, and the methods of series solutions and differential equations are not of much help. I am asking for the solution to this problem, or at least the mathematical prerequisites needed to understand it.
This function is asymptotically zero at ±infinity, positive otherwise, and has a peak near zero.
If I am correct, then this is the frequency distribution of numbers generated by the sequence x_{n+1} = a ln(x_n^2), as long as the sequence is chaotic and sufficiently long (the value of a is usually limited to 0.2 to 1.3; the sign of a is essentially immaterial, except that the sequence is negated after the first term). The sequence is seeded with a number of roughly the same order of magnitude as 1, positive or negative, excluding seed values that eventually lead to zero or infinity. The sequence is allowed to proceed, the frequency distribution of its numbers is recorded, and from it a continuous probability distribution may be guessed numerically but not found analytically. The functional equation for this distribution comes from the following reasoning:
- Suppose the probability distribution is given by y = f(x). If I consider an infinitesimally thin strip of width dx around x, then f(x) dx is the fraction of all points used to construct the distribution that fall in that strip. When these points are passed through one more step of the recurrence x_{n+1} = a ln(x_n^2), the fraction of points involved must be unchanged. That is, when x is replaced by a ln(x^2), the infinitesimal strip area, which becomes f(a ln x^2) d(a ln x^2), must be numerically equal to f(x) dx; thus the functional equation is postulated.
- I am not entirely sure about this reasoning, and experts are welcome to point out any flaw in it and show the correct equation if I am mistaken.
Please see my related question https://www.researchgate.net/post/Can-you-figure-out-Chaos-of-the-recurrence-x-n-1lnx-n2 for further details.
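Writing out the reasoning above explicitly (my reading of the question, not a verified derivation), and noting for comparison the standard invariant-density equation, which sums over preimages and may be the form actually needed here:

```latex
% Postulated invariance under one step of the map T(x) = a \ln x^2:
f(x)\,dx = f\!\left(a \ln x^2\right) d\!\left(a \ln x^2\right)
         = f\!\left(a \ln x^2\right) \frac{2a}{x}\,dx,
\qquad\text{so}\qquad
f(x) = \frac{2a}{x}\, f\!\left(a \ln x^2\right).

% Standard invariant-density (Perron--Frobenius) equation for a map T,
% which instead sums over all preimages of y:
f(y) = \sum_{x \,:\, T(x) = y} \frac{f(x)}{\lvert T'(x) \rvert}.
```

The discrepancy between the two forms (pushing a strip forward versus pulling the density back through all preimages) may be exactly the fault in reasoning the question asks about.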
Hi everyone. Recently I designed a customized semantic segmentation network with 31 layers and SGDM optimization to segment plant leaf regions from complicated backgrounds. Can anyone help me explain this with mathematical expressions from image processing? Thank you.
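Not specific to the 31-layer network in question, but two standard ingredients of such a pipeline can be written down generically: the per-pixel softmax cross-entropy loss and the SGDM update rule. The symbols (logits z, one-hot labels y, momentum γ, learning rate η) are the usual conventions, not taken from the original design:

```latex
% Per-pixel softmax over classes c and cross-entropy loss over N pixels (i,j):
p_{ij}(c) = \frac{\exp(z_{ij,c})}{\sum_{c'} \exp(z_{ij,c'})},
\qquad
L = -\frac{1}{N} \sum_{i,j} \sum_{c} y_{ij,c}\, \log p_{ij}(c).

% SGDM (stochastic gradient descent with momentum) parameter update:
v_{t+1} = \gamma\, v_t + \eta\, \nabla_{\theta} L(\theta_t),
\qquad
\theta_{t+1} = \theta_t - v_{t+1}.
```

The convolution, pooling, and upsampling layers each have similar textbook formulations that can be cited layer by layer.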
I got confused when I plotted the graph of -(x^2 - x)^(2/3). The graph shows that the function achieves its maxima at x = 0 and x = 1, but when we follow the procedure of setting the derivative to zero, we get x = 0.5. Please help me with this.
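A possible resolution (worth double-checking): setting the derivative to zero only finds critical points where the derivative exists. At x = 0 and x = 1 the function has cusps where the derivative is undefined, the function value is 0 there, and it is negative everywhere else, so those are the global maxima; a sign analysis of the derivative shows that x = 0.5 is actually a local minimum. A quick numerical check, written with a real cube root so negative bases are handled correctly:

```python
# g(x) = -(x^2 - x)^(2/3), computed as -((x^2 - x)^2)^(1/3) so that the
# cube root stays real for negative x^2 - x.
def g(x):
    return -((x * x - x) ** 2) ** (1.0 / 3.0)

print(g(0.0), g(1.0))   # both zero: the cusp points, where g' is undefined
print(g(0.5))           # about -0.397: the g'(x) = 0 point, a local minimum
print(g(0.25))          # less negative than g(0.5), consistent with a minimum at 0.5
```

So the derivative procedure is not wrong; it just has to be supplemented with the points where the derivative fails to exist.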
I have the x,y data of two curves (solid red and blue in the attached image) and I want to find out the envelope function of the two (dashed magenta curve in the attached image). Is there a way to do it in Origin or Igor? Or in any other mathematical software for instance?
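If a crude envelope is enough, the pointwise maximum of the two curves on a common x grid is a start; smoother envelopes are usually built by fitting a spline through the local maxima, which most analysis packages (including Origin and Igor) can do. A minimal sketch with hypothetical data, assuming both curves are already sampled on the same x values (otherwise interpolate onto a common grid first):

```python
# Upper envelope as the pointwise maximum of two curves on a shared x grid
# (hypothetical data standing in for the red and blue curves).
x  = [0.0, 1.0, 2.0, 3.0, 4.0]
y1 = [0.0, 1.0, 0.0, 1.0, 0.0]   # "red" curve
y2 = [1.0, 0.0, 1.0, 0.0, 1.0]   # "blue" curve

envelope = [max(a, b) for a, b in zip(y1, y2)]
print(envelope)  # [1.0, 1.0, 1.0, 1.0, 1.0]
```

For oscillating curves where the dashed envelope should skim the peaks rather than trace the larger curve, replace the pointwise max with a spline through the detected local maxima of `envelope`.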
I want to integrate two carbon fiber materials together and want to model it mathematically considering the processes of joining the materials and the possible stress around the joints due to external loads.
I would appreciate it if someone could recommend which numerical methods are better suited for this purpose.
Thank you all in advance
In a recent paper, exhibited at the 51st Annual Iranian Mathematics Conference and entitled "Notes on maximal subrings of rings of continuous functions", we give some properties of maximal subrings of some classes of subrings of C(X). However, we could not answer the following two important questions in this context.
1. Is every maximal subring of C(X) unit-free (i.e., whenever R is a maximal subring of C(X) and
f is an element of R with empty zero-set, then f is a unit of R)?
2. Is every maximal subring of C(X) uniformly closed (i.e., closed under uniform topology on C(X))?
I would be very delighted if you could let me know your opinion about any ideas towards approaching the answers to these questions.
I recently saw a question regarding the relationship between language and mathematical learning, but am interested in learning more about the opposite.
Can anyone recommend relevant readings that explore the relationship between mathematical ability/maths learning and language acquisition? I am primarily interested in second/foreign language acquisition, but also interested in first language acquisition and its relation to mathematics.
(This question was prompted by something I read in one of Georgette Yakman's papers on STEAM learning (2008, I believe), which suggested something along the lines that understanding of mathematics was integral to understanding language, and am interested in learning more about the topic).
Can you just assume a situation with no figures or data to back it up? Is that reasonable?
#Logic #mathematics #data
As is well known, Linked Open Data (LOD) and computational ontologies have had great success in the Life Sciences (Biology, Medicine, etc.). See e.g. the big LS cluster at <https://lod-cloud.net/>.
However, I wonder why mathematics is – in comparison – covered only sparsely by ontologies or LOD.
Indicators (to the best of my current knowledge):
- https://lod-cloud.net/datasets?search=mathematics retrieves only one result. This links to http://msc2010.org/mscwork/ which seems outdated and contains several broken (404) links.
- Since http://ksl-web.stanford.edu/knowledge-sharing/papers/engmath.html (Gruber and Olsen) there seems to have been no attempt at ontological modelling of mathematics as a whole (or at least a significant portion of it).
- https://www.ebi.ac.uk/ols/ontologies gives only one hit for "math" (in browser search)
- there is no "math*" tag on https://lov.linkeddata.es/dataset/lov/vocabs?&tag_limit=0 (but there is "biology" or "geography" or "geometry")
Probably there is some (machine-processable) formalization of mathematical knowledge but it seems almost disconnected from the "semantic web" and LOD-bubble.
- Why is this?
- Should this be changed?
- If so, how?
I need to write a MATLAB code that has the ability to process a GIS image in order to extract the coordinates of the grid points within the red region (R) and that are at least distance "d" from its boundary. Each point in the R is given a weight w1 (attached figure). The same procedure is to be made for the green region (G) but w2 is the weight of any point in G. The gathered data are saved in a matrix formed of three rows: row 1 contains the abscissa, row 2 contains the ordinate, and row 3 the weight.
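The question asks for MATLAB; as a language-neutral illustration of the logic (labelling the regions, eroding by distance d, and stacking the three-row output matrix), here is a sketch in Python on a tiny synthetic grid. The labels, weights, and d value are all made up; for a real GIS raster the labels would come from thresholding the RGB channels:

```python
# Tiny synthetic label grid: 0 = background, 1 = red region R, 2 = green region G.
img = [
    [0, 0, 0, 0, 0, 0],
    [0, 1, 1, 1, 1, 0],
    [0, 1, 1, 1, 1, 0],
    [0, 1, 1, 1, 1, 0],
    [0, 0, 2, 2, 2, 0],
    [0, 0, 0, 0, 0, 0],
]
H, W = len(img), len(img[0])

def interior_points(label, d):
    """Grid points of `label` whose whole (2d+1)x(2d+1) neighbourhood stays
    inside the region, i.e. points at Chebyshev distance >= d from the
    boundary (a simple morphological erosion)."""
    pts = []
    for y in range(H):
        for x in range(W):
            if img[y][x] == label and all(
                0 <= y + dy < H and 0 <= x + dx < W and img[y + dy][x + dx] == label
                for dy in range(-d, d + 1) for dx in range(-d, d + 1)
            ):
                pts.append((x, y))
    return pts

w1, w2 = 1.0, 0.5                                   # hypothetical weights
red, green = interior_points(1, 1), interior_points(2, 1)

# Three-row matrix: row 1 = abscissae, row 2 = ordinates, row 3 = weights.
matrix = [
    [x for x, y in red] + [x for x, y in green],
    [y for x, y in red] + [y for x, y in green],
    [w1] * len(red) + [w2] * len(green),
]
print(matrix)
```

In MATLAB the same steps map onto logical masks, `imerode` with a square structuring element of size 2d+1, and `find` to collect coordinates.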
I am looking forward to getting your suggestions...thanks in advance.
Does anybody know of a mathematical optimization model that combines vendor-managed inventory with fixed lot sizes for production? That is, a model that performs simultaneous production and delivery (transportation) planning with respect to production and transportation lot sizes.
The key figures should be fixed lot sizes for production, transportation assets, capacities and costs from vendor to VMI stock, known demand on the customer (retailer) side, and inventory restrictions for the VMI stock.
It would also be helpful if you know of any paper or similar work that has addressed this problem.
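I don't know a reference model off-hand, but search keywords such as "joint economic lot sizing", "integrated production-distribution planning", and "VMI lot-sizing MILP" may help. As a toy illustration only (my own sketch, not a model from the literature), the listed requirements could be captured along these lines:

```latex
% Notation (mine): q = fixed production lot size, Q = fixed transport lot size,
% x_t, z_t = integer numbers of production / transport lots in period t,
% d_t = known retailer demand, I^P_t, I^V_t = producer and VMI inventories.
\min \sum_{t} \left( c^{p} q\, x_t + c^{tr} z_t
      + h^{P} I^{P}_t + h^{V} I^{V}_t \right)
\quad \text{s.t.} \quad
\begin{aligned}
  I^{P}_t &= I^{P}_{t-1} + q\, x_t - Q\, z_t, & I^{P}_t &\ge 0, \\
  I^{V}_t &= I^{V}_{t-1} + Q\, z_t - d_t,     & I_{\min} &\le I^{V}_t \le I_{\max}, \\
  x_t,\, z_t &\in \mathbb{Z}_{\ge 0}. &&
\end{aligned}
```

Transportation capacities would add an upper bound on z_t; whether such a model already exists in the VMI literature is exactly the question.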
Single-period and multi-period inventory systems are very necessary in daily life. When the selling period is fixed, that is, we cannot sell items outside that fixed time, the system may be called single-period. Let's discuss what the actual definition is.
We know many very large numbers, such as the googol (10^100), the googolplex (10^googol), and other unimaginably large numbers.
What is the largest known number that is the result of solving a problem in physics or mathematics?
Are they really practical or just based on conjecture?
Most plagiarism checkers don’t work because they cannot check equations and theorems, which are the fundamental core of a mathematics article.