Freebasic 1.20.0 Development
Re: Freebasic 1.20.0 Development
My bad. I got the latest changes but didn't apply them...
So confirmed now.
Re: Freebasic 1.20.0 Development
Xusinboy Bekchanov wrote: ↑Apr 05, 2024 14:57 Hello, when compiling this example using compiler 1.20.0 dated 04/01/2024, it gives the error:
Yes, thank-you.
It is related to non-indexed arrays. This change was missed in PUT quirk handling, and no tests exist yet in the test-suite.
PUT has different rtlib support functions depending on the argument type: fb_FilePut, fb_FilePutArray, fb_FilePutArrayLarge, plus some others; similar for GET. The wrong rtlib support function is being selected, resulting in mismatched argument types.
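As a minimal illustration (a sketch; which rtlib function fbc actually emits for each form is an internal detail inferred from the names above):

```
' Scalar PUT vs. non-indexed array PUT: the compiler must select
' fb_FilePut for the scalar and fb_FilePutArray / fb_FilePutArrayLarge
' for the whole-array form (the case that broke).
Dim As Long v = 42
Dim As Long a(0 To 9)
Dim As Integer f = FreeFile
Open "test.dat" For Binary Access Write As #f
Put #f, , v      ' scalar argument
Put #f, , a()    ' non-indexed array argument
Close #f
Kill "test.dat"
```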
For interest:
- "Non-indexed array" refers to the form 'array()', though sometimes the parentheses are optional (e.g. 'erase array').
- fbc does not have an explicit 'array datatype', i.e. 'typeof(array)' is the datatype of the array element. So determining that some symbol name is an array requires examining the symbol's extra (internal symbol database) data rather than the symbol's data type.
- fbc's determination of 'symbol is an array' was almost entirely accomplished by the parser, which requires many checks throughout the compiler: are the next 2 tokens '(' and ')'? If yes, then it is an array.
- The changes in fbc-1.20.0 now handle 'array()' as a kind of expression within the AST rather than only while parsing. This change should allow simplifying the fbc codebase, fixing some old bugs, and adding some new features with respect to arrays.
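A tiny sketch of the forms discussed above:

```
' 'array()' denotes the whole array rather than an element.
Dim As Integer a(1 To 3) = {1, 2, 3}
Dim As Typeof(a) e    ' Typeof(a) yields the element datatype (Integer), not an 'array type'
Erase a               ' one place where the parentheses on the array name are optional
Print a(1)            ' indexed access, by contrast, names a single element
```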
I appreciate the report. I have a fix for the problem ... just need to add tests to the test-suite.
Re: Freebasic 1.20.0 Development
I do not see a problem with Get#.
coderJeff wrote: ↑Apr 05, 2024 18:53 It is related to non-indexed arrays. This change was missed in PUT quirk handling and no tests in the test-suite yet exist.
PUT has different rtlib support functions depending on the argument type: fb_FilePut, fb_FilePutArray, fb_FilePutArrayLarge, plus some others; similar for GET. The wrong rtlib support function is being selected, resulting in mismatched argument types.
Re: Freebasic 1.20.0 Development
@fxm: as there is no daily build, you can test with my own compilation (64-bit only)
https://users.freebasic-portal.de/sarg/fb64.zip
https://users.freebasic-portal.de/sarg/fbc64.zip
Edit: thank you fxm.
Doing things too fast before homework in the garden under a nice sun.
Last edited by fxm on Apr 07, 2024 9:09, edited 1 time in total.
Reason: Typo corrected.
Re: Freebasic 1.20.0 Development
With the latest fixes, Put# now works with arrays again (thanks).
In the south perhaps?
Re: Freebasic 1.20.0 Development
Hello, this problem has been resolved with the new compiler dated 07.04.2024, thank you.
Xusinboy Bekchanov wrote: ↑Apr 05, 2024 14:57 Hello, when compiling this example using compiler 1.20.0 dated 04/01/2024, it gives the error:
Code: Select all
#include once "windows.bi"

Private Function GetRed(FColor As Long) As Integer
    Return CUInt(FColor) And 255
End Function

Private Function GetGreen(FColor As Long) As Integer
    Return CUInt(FColor) Shr 8 And 255
End Function

Private Function GetBlue(FColor As Long) As Integer
    Return CUInt(FColor) Shr 16 And 255
End Function

Type RGB3 Field = 1
    G As Byte
    B As Byte
    R As Byte
End Type

Type BitmapStruct Field = 1
    Identifier As WORD
    FileSize As DWORD
    Reserved0 As DWORD
    bmpDataOffset As DWORD
    bmpHeaderSize As DWORD
    bmpWidth As DWORD
    bmpHeight As DWORD
    Planes As WORD
    BitsPerPixel As WORD
    Compression As DWORD
    bmpDataSize As DWORD
    HResolution As DWORD
    VResolution As DWORD
    Colors As DWORD
    ImportantColors As DWORD
End Type

Static As BitmapStruct BM
Dim As Integer F, x, y, Clr, Count = 0
Dim As HDC FDevice
Dim As Integer FWidth = 50, FHeight = 50
ReDim PixelData(FWidth * FHeight - 1) As RGB3

For y = FHeight - 1 To 0 Step -1
    For x = 0 To FWidth - 1
        Clr = GetPixel(FDevice, x, y)
        PixelData(Count).G = GetGreen(Clr)
        PixelData(Count).R = GetRed(Clr)
        PixelData(Count).B = GetBlue(Clr)
        Count += 1
    Next x
Next y

BM.Identifier = 66 + 77 * 256
BM.Reserved0 = 0
BM.bmpDataOffset = 54
BM.bmpHeaderSize = 40
BM.Planes = 1
BM.BitsPerPixel = 24
BM.Compression = 0
BM.HResolution = 0
BM.VResolution = 0
BM.Colors = 0
BM.ImportantColors = 0
BM.bmpWidth = FWidth
BM.bmpHeight = FHeight
BM.bmpDataSize = FWidth * FHeight * 3
BM.FileSize = BM.bmpDataOffset + BM.bmpDataSize

F = FreeFile
Open "Test.bmp" For Binary Access Write As #F
Put #F, , BM
Put #F, , PixelData() ' Error in this line - PixelData()
Close #F
How can I fix this?
Code: Select all
Untitled.bas(69) warning 50(2): Suspicious address expression passed to BYREF parameter, at parameter 3
Aborting due to runtime error 12 ("segmentation violation" signal)
Re: Freebasic 1.20.0 Development
It seems the 64-bit build works but 32-bit still failed when building the new mff.
Xusinboy Bekchanov wrote: ↑Apr 08, 2024 11:23 Hello, this problem has been resolved with the new compiler dated 07.04.2024, thank you.
Re: Freebasic 1.20.0 Development
No, it also works.
(have you refreshed your mff page?)
Check your fbc.exe:
Code: Select all
Print __FB_BUILD_DATE_ISO__
2024-04-07
Re: Freebasic 1.20.0 Development
fxm wrote: ↑Apr 08, 2024 14:20 No, it also works.
(have you refreshed your mff page?)
Check your fbc.exe:Code: Select all
Print __FB_BUILD_DATE_ISO__
2024-04-07
You are so right. Thanks a lot.
Re: Freebasic 1.20.0 Development
An update of some things being worked on...
If we are going to make changes to string handling, (for example as discussed in this topic, temporary descriptors), I would like a way to conveniently profile the performance differences instead of having to write custom timing tests.
The past couple of weekends I have been working on bringing back lillo's profiler from the early days. I had asked lillo back in 2008 if I could remove it, because the profiler had several issues and gprof looked like it would be great; and it was, for a time. So profiling was replaced by gprof support. Nowadays my experience is that mingw-w64 gprof doesn't work well, if at all, on Windows: it doesn't crash, it just doesn't capture any profiling data. fbc may be at fault here too, since there are some target-specific details that may be missing.
lillo's profiler uses fb's builtin TIMER for measuring time. It is not the most precise, but overall the feature has some good reporting capabilities and should be helpful for longer-running algorithms. It is not suitable for analyzing cycle counts of short-duration procedures. Basically, I used his profiler as a starting point and have rewritten many parts to work with the current fbc. It seems to work OK with the gcc and gas64 backends. Two problems are left to solve: 1) a crash in some kinds of code compiled with the gas backend; 2) very slow report processing and generation when there are many procedures. One of the processing algorithms uses a linear search over procedure names, so it slows quadratically as the number of procedures increases.
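For a rough idea of the kind of measurement involved, here is a hand-written sketch using TIMER (the names are my own, not the profiler's):

```
' Hand-rolled timing of one procedure with TIMER; as noted above,
' TIMER's resolution limits this approach to longer-running code.
Dim Shared As Double workTotal

Sub work()
    Dim As Integer i
    Dim As Double s
    For i = 1 To 1000000
        s += i
    Next
End Sub

Dim As Double t0 = Timer
work()
workTotal += Timer - t0
Print Using "work() total: #.###### s"; workTotal
```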
SARG has been experimenting in gas64 with the RDTSC instruction for precise measurements of some kinds of programs. The overhead is low, so it should be accurate. It's at the stage of just getting it to work, so reporting capability and flexibility are minimal to none.
Anyway, even though all of these profiling options have limitations, my goal is something (anything) that can be used to profile fbc itself. My concern is that over the last 2 years fbc has come to take about 20% longer to execute the continuous integration testing on Travis-CI. Maybe our time share is throttled, maybe the test-suite is just larger, maybe fbc just does more stuff now, but most concerning would be a choke point in some fbc algorithm, or that new changes will just make overall performance slower.
Re: Freebasic 1.20.0 Development
Just feedback (sorry for not being able to give more). While developing lzle, at one point I faced a 15%-25% performance drop. The problem was the following: resistance to memory fragmentation over time. This may seem paradoxical, but the less memory was exposed to the risk of fragmentation, the worse the performance. The reason is simple: the more memory operations protect against the risk of fragmentation, the more elementary they are, and therefore the more numerous, and therefore the more resource-consuming. The level of tolerance for fragmentation depends on the actual needs of the program, so there is no optimal generic solution on this point; but as soon as data or algorithmic structures offering a greater level of atomicity are added/used/tested, it is logical to observe a slight loss of performance, the expected counterparts being cleaner code, a more powerful instruction set more in line with conceptual modeling standards, and better resistance to the variability of memory cycles.
I won't be able to say whether this could explain, at least partially, the problem observed; it is just one hypothesis among others.
In my case, I had to resign myself to admitting limits to optimization. In FB, I find it very difficult to see how the expected level of fragmentation tolerance could become, if necessary, a property of the syntactic encapsulation of certain objects, and above all remain transparent from an algorithmic point of view... for a hypothetical gain of... 10%?
Re: Freebasic 1.20.0 Development
Our main benchmark is compiler self-compilation. It is a large program that does real work, and is therefore a better indicator of real-world performance differences than micro-benchmarks.
However, the problem is that as the compiler mutates over time, timings can't be compared across longer periods; this kind of benchmarking is instead used to test the impact of, e.g., a branch to be merged in.
Re: Freebasic 1.20.0 Development
Also remember to take into account at the same time the bug report:
#821 LEN() and SIZEOF() should not be allowed when used with bitfields
Example grouping together all these cases of inconsistent LEN/SIZEOF values, highlighted by ">>> value <<<":
Code: Select all
#if __FB_VERSION__ >= "1.09.0"
#include once "fbc-int/array.bi"
#endif
Scope
Dim As Zstring * 5 s(1 To...) = {"1", "12", "123"}
Print "SIZED ARRAY : 'Dim As Zstring * 5 s(1 To...) = {""1"", ""12"", ""123""}'"
Print "Len(s(1))", "Len(s(2))", "Len(s(3))", "Len(s)", "Len(Typeof(s))"
Print Len(s(1)), Len(s(2)), Len(s(3)), ">>> " & Len(s) & " <<<", Len(Typeof(s))
Print "Sizeof(s(1))", "Sizeof(s(2))", "Sizeof(s(3))", "Sizeof(s)", "Sizeof(Typeof(s))"
Print Sizeof(s(1)), Sizeof(s(2)), Sizeof(s(3)), ">>> " & Sizeof(s) & " <<<", Sizeof(Typeof(s))
#if __FB_VERSION__ >= "1.09.0"
Print "FB.Arraylen(s())"
Print FB.Arraylen(s())
Print "FB.Arraysize(s())"
Print FB.Arraysize(s())
#endif
Print
End Scope
Scope
Dim As Zstring * 5 s()
Print "UNSIZED ARRAY : 'Dim As Zstring * 5 s()'"
Print "Len(s(0))", "Len(s)", "Len(Typeof(s))"
Print Len(s(0)), ">>> " & Len(s) & " <<<", Len(Typeof(s))
Print "Sizeof(s(0))", "Sizeof(s)", "Sizeof(Typeof(s))"
Print Sizeof(s(0)), ">>> " & Sizeof(s) & " <<<", Sizeof(Typeof(s))
#if __FB_VERSION__ >= "1.09.0"
Print "FB.Arraylen(s())"
Print FB.Arraylen(s())
Print "FB.Arraysize(s())"
Print FB.Arraysize(s())
#endif
Print
End Scope
Type UDT
As Ushort a : 1 = 0
As Ubyte b : 1 = 0
End Type
Print "BITFIELD : 'Type UDT : As Ushort a : 1 = 0 : As Ubyte b : 1 = 0 : End Type'"
Print "Len(UDT)", "Sizeof(UDT)"
Print Len(UDT), Sizeof(UDT)
Print "Len(UDT.a)", "Sizeof(UDT.a)"
Print ">>> " & Len(UDT.a) & " <<<", ">>> " & Sizeof(UDT.a) & " <<<"
Print "Len(UDT.b)", "Sizeof(UDT.b)"
Print ">>> " & Len(UDT.b) & " <<<", ">>> " & Sizeof(UDT.b) & " <<<"
Print "Len(UDT().a)", "Sizeof(UDT().a)"
Print ">>> " & Len(UDT().a) & " <<<", ">>> " & Sizeof(UDT().a) & " <<<"
Print "Len(UDT().b)", "Sizeof(UDT().b)"
Print ">>> " & Len(UDT().b) & " <<<", ">>> " & Sizeof(UDT().b) & " <<<"
Print
Sleep
Code: Select all
Sizeof(s(1)) Sizeof(s(2)) Sizeof(s(3)) Sizeof(s) Sizeof(Typeof(s))
5 5 5 >>> 5 <<< 5
FB.Arraylen(s())
3
FB.Arraysize(s())
15
UNSIZED ARRAY : 'Dim As Zstring * 5 s()'
Len(s(0)) Len(s) Len(Typeof(s))
0 >>> 5 <<< 5
Sizeof(s(0)) Sizeof(s) Sizeof(Typeof(s))
5 >>> 5 <<< 5
FB.Arraylen(s())
0
FB.Arraysize(s())
0
BITFIELD : 'Type UDT : As Ushort a : 1 = 0 : As Ubyte b : 1 = 0 : End Type'
Len(UDT) Sizeof(UDT)
2 2
Len(UDT.a) Sizeof(UDT.a)
>>> 2 <<< >>> 2 <<<
Len(UDT.b) Sizeof(UDT.b)
>>> 2 <<< >>> 2 <<<
Len(UDT().a) Sizeof(UDT().a)
>>> 4 <<< >>> 4 <<<
Len(UDT().b) Sizeof(UDT().b)
>>> 4 <<< >>> 4 <<<
[edit]
Bug report filed:
#1004 LEN() and SIZEOF() should not be allowed when used with an array name without index
Last edited by fxm on May 05, 2024 11:22, edited 1 time in total.
Reason: Added the bug report link.
Re: Freebasic 1.20.0 Development
I am interested in how many times certain functions are called, and whether those functions are called orders of magnitude more than any other function; if they are, then it is probably worth the effort to determine whether the function can be reworked or optimized.
Lost Zergling wrote: ↑Apr 15, 2024 11:41 I had to resolve to admit limits to optimization. In FB, I find it very difficult to see how the expected level of fragmentation tolerance could become, if necessary, a property of the syntax encapsulation of certain objects, but above all transparent from an algorithmic point of view... for a hypothetical gain of.. 10%?..
Problem for me is that the compiler has never been benchmarked as far as I know, and comparisons before and after major changes were never examined. So it's not really something that has been tracked in any regard.
marcov wrote: ↑Apr 15, 2024 12:21 Our main benchmark is compiler self compilation. It is a large program that does something, and therefore better indicative of real world performances differences than micro benchmarks.
However, the problem is that as the compiler mutates in time, timings can't be compared over longer periods, but this kind of benchmarking is used to test the impact of e.g. a branch to merge in.
All I have right now is a feeling based on build times as recorded by the CI host. I would predict that the build time is dominated by opening files on the file system and executing child processes, which I hope is the primary cause of the longer build times and simply a result of more test files and more source files having been added over the years.
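Coming back to the call-count idea above, a trivial hand-instrumented sketch (with hypothetical names) of the kind of data a profiler would gather automatically, so that outlier procedures show up:

```
' Count calls per procedure by hand; a profiler automates exactly this.
Dim Shared As ULongInt hotCalls, coldCalls

Sub hotProc()
    hotCalls += 1
End Sub

Sub coldProc()
    coldCalls += 1
End Sub

Dim As Integer i
For i = 1 To 100000
    hotProc()
Next
coldProc()

' If one counter dwarfs the others, that procedure is the place to
' look for rework or optimization.
Print "hotProc: "; hotCalls, "coldProc: "; coldCalls
```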